I'm using mongoose to deal with my database.
I have the following model:
var DeviceSchema = mongoose.Schema({
type: Number,
pushId: String
});
The type attribute can be either 0 or 1.
I want to execute a single query that grabs all documents and returns the result in the following format:
{
fstType: [
{
_id: "545d533e2c21b900000ad234",
type: 0,
pushId: "123"
},
{
_id: "545d533e2c21b900000ad235",
type: 0,
pushId: "124"
},
],
sndType: [
{
_id: "545d533e2c21b900000ad236",
type: 1,
pushId: "125"
},
{
_id: "545d533e2c21b900000ad237",
type: 1,
pushId: "126"
},
]
}
Is that possible? I want to do that in one single query.
Thanks.
Yes, it is possible. You can achieve the desired result through the following aggregation pipeline stages.
Sort by the type field in ascending order.
Group together records that have the same type and construct an array of documents for each group. After this stage only two records remain, each with an attribute called items, which holds the array of documents for that group. Since the records are sorted by type, the first group contains the records with type 0 and the second the records with type 1.
Finally, merge the two groups into a single document and give each array a name based on its type.
var model = mongoose.model('collection', DeviceSchema);
model.aggregate([
    { $sort: { "type": 1 } },
    { $group: { "_id": "$type",
                "items": { $push: "$$ROOT" },
                "type": { $first: "$type" } } },
    { $project: { "items": { $cond: [ { $eq: [ "$type", 0 ] },
                                      { "firstType": "$items" },
                                      { "secondType": "$items" } ] } } },
    { $group: { "_id": null,
                "firstType": { $first: "$items.firstType" },
                "secondType": { $last: "$items.secondType" } } },
    { $project: { "_id": 0, "firstType": 1, "secondType": 1 } }
], function (err, result) {
if (err) {
console.log(err);
return;
}
console.log(result);
});
Output:
{ firstType:
[ { _id: '545d533e2c21b900000ad234', type: 0, pushId: '123' },
{ _id: '545d533e2c21b900000ad235', type: 0, pushId: '124' } ],
secondType:
[ { _id: '545d533e2c21b900000ad236', type: 1, pushId: '125' },
{ _id: '545d533e2c21b900000ad237', type: 1, pushId: '126' } ] }
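If you are on MongoDB 3.4 or newer, an alternative sketch of the same reshaping uses a single $facet stage, where each facet runs its own sub-pipeline over the full input; the fstType/sndType keys mirror the question:
model.aggregate([
  {
    $facet: {
      // Each facet receives every input document and produces its own array
      fstType: [ { $match: { type: 0 } } ],
      sndType: [ { $match: { type: 1 } } ]
    }
  }
], function (err, result) {
  if (err) return console.log(err);
  console.log(result[0]); // { fstType: [ ...type 0 docs... ], sndType: [ ...type 1 docs... ] }
});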
Related
I am trying to send a list of paid and unpaid clients, with counts, along with the data from my Node API.
With the mongoose method below, I am stuck on how to go further.
Can anyone suggest the best way to achieve this?
router.get("/", ensureAuthenticated, (req, res) => {
Loan.aggregate([
{
$match: {
ePaidunpaid: "Unpaid"
}
}
]).then(function(data) {
console.log(data);
res.render("dashboard", { admin: req.user.eUserType, user: req.user,data:data });
});
});
Loan Model:
const Loan = new Schema({
sName: { type: String },
sPurpose: [String],
sBankName: String,
sBranchName: [String],
nTotalFees: { type: Number },
ePaidunpaid: { type: String, default: 'Unpaid' },
sCashOrCheque: { type: String },
});
Desired outcome: details of the users along with counts of paid and unpaid clients:
[
Paid:{
// Paid users
},
Unpaid:{
// Unpaid Users
},
]
Well in that case, try this -
Loan.aggregate([
{
$group: {
_id: "$ePaidunpaid",
data: { $push: "$$ROOT" },
count: { $sum: 1 }
}
}
]);
Output would be something like this -
{
"_id": "Paid",
"data": [
// All the documents having ePaidunpaid = Paid
{ _id: "asdasd123 1eqdsada", sName: "Some name", // Rest of the fields },
{ _id: "asdasd123 1eqdsada", sName: "Some name", // Rest of the fields }
],
count: 2
},
{
"_id": "Unpaid",
"data": [
// All the documents of having ePaidunpaid = Unpaid
{ _id: "asdasd123 1eqdsada", sName: "Some name", // Rest of the fields },
{ _id: "asdasd123 1eqdsada", sName: "Some name", // Rest of the fields }
],
count: 2
},
Explanation
The first stage of the pipeline, $group, groups all the documents by the ePaidunpaid field, which has only two values, Paid and Unpaid, thus producing just two documents.
The next step is to accumulate the original documents being grouped together. This is achieved with the $push accumulator on the data field, pushing $$ROOT, which references the document currently being processed by the pipeline stage.
Finally, since you need a count of paid and unpaid users, the $sum accumulator counts the items in each group.
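If you want the result keyed by status, as in your desired outcome, rather than as an array of group documents, you could reshape it in JavaScript after the aggregation runs. A minimal sketch, assuming the model and field names above (the byStatus variable is just for illustration):
Loan.aggregate([
  {
    $group: {
      _id: "$ePaidunpaid",
      data: { $push: "$$ROOT" },
      count: { $sum: 1 }
    }
  }
]).then(function (groups) {
  // Turn [{ _id: "Paid", ... }, { _id: "Unpaid", ... }] into { Paid: {...}, Unpaid: {...} }
  var byStatus = {};
  groups.forEach(function (group) {
    byStatus[group._id] = { data: group.data, count: group.count };
  });
  console.log(byStatus); // { Paid: { data: [...], count: n }, Unpaid: { data: [...], count: m } }
});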
I have a use case where the database is modelled like this:
name: XYZ
gradeCards: [{
id: 1234, // id of the report card
comments: ['GOOD','NICE','WOW']
}, {
id: 2345,
comments: ['GOOD','NICE TRY']
}]
Now, I would like to query the schema as follows:
I would be given a list of ids and values.
For example, the given list is as follows:
[{
id: 1234,
comments: ['GOOD','NICE']
},{
id: 2345,
comments: ['GOOD']
}]
In short, the id should match exactly and the given comments should be a sub-array of the comments array for that id. All the conditions specified in the list should be satisfied, i.e. an AND across all the provided conditions.
I was able to get to the query below, but it only matches when every element of the comments array is given; I want the id to match exactly and the given comments to be a sub-array.
For matching all elements in the comments array:
db.getCollection('user').find({gradeCards:{
$all: [{
id:1234,
comments:['GOOD','NICE','WOW']
},{
id:2345,
comments:['GOOD','NICE TRY']
}]
}})
You can try $all with $elemMatch to match on the query conditions.
db.collection.find({
gradeCards: {
$all: [{
"$elemMatch": {
id: 1234,
comments: {
$in: ['GOOD', 'NICE']
}
}
}, {
"$elemMatch": {
id: 2345,
comments: {
$in: ['GOOD']
}
}
}, ]
}
})
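Since the list of { id, comments } conditions is supplied at runtime, you could build the $all array programmatically rather than writing it by hand. A minimal sketch, assuming the input shape shown above (the criteria variable name is just for illustration):
// Hypothetical input list, as described in the question
var criteria = [
  { id: 1234, comments: ['GOOD', 'NICE'] },
  { id: 2345, comments: ['GOOD'] }
];

// Build one $elemMatch clause per { id, comments } pair
var allClauses = criteria.map(function (c) {
  return {
    $elemMatch: {
      id: c.id,
      comments: { $in: c.comments }
    }
  };
});

db.getCollection('user').find({ gradeCards: { $all: allClauses } });
Note that $in matches when the comments array contains at least one of the listed values; if every listed comment must be present (a strict sub-array), $all could be used in its place inside the $elemMatch.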
I don't really know how to frame the question, but what I have is the following schema in mongoose:
new Schema({
gatewayId: { type: String, index: true },
timestamp: { type: Date, index: true },
curr_property:Number,
curr_property_cost:Number,
day_property:Number,
day_property_cost: Number,
curr_solar_generating: Number,
curr_solar_export:Number,
day_solar_generated:Number,
day_solar_export:Number,
curr_chan1:Number,
curr_chan2:Number,
curr_chan3:Number,
day_chan1:Number,
day_chan2:Number,
day_chan3:Number
},{
collection: 'owlelecmonitor'
});
and I want to be able to query all the documents in the collection, but the data should be arranged inside the array in the following format:
[ [{
gatewayId: 1,
timestamp: time
....
},
{
gatewayId: 1,
timestamp: time2
....
}],
[{
gatewayId: 2,
timestamp: time
....
},
{
gatewayId: 2,
timestamp: time2
....
}],
[{
gatewayId: 3,
timestamp: time
....
},
{
gatewayId: 3,
timestamp: time2
....
}]
];
Is there a way that I can do this in mongoose instead of retrieving the documents and processing them again?
Yes, it's possible. Consider the following aggregation pipeline in the mongo shell. It consists of a single $group stage, grouping all the documents by gatewayId and creating another array field that holds all the grouped documents. This extra field uses the accumulator operator $push on the system variable $$ROOT, which returns the root document, i.e. the top-level document currently being processed in the aggregation pipeline stage.
With the cursor returned from the aggregate() method, you can then use its map() method to create the desired final array. The following mongo shell demonstration illustrates the concept:
var result = db.owlelecmonitor.aggregate([
{
"$group": {
"_id": "$gatewayId",
"doc": {
"$push": "$$ROOT"
}
}
}
]).map(function (res){ return res.doc; });
printjson(result);
This will print the desired result to the shell.
To implement this in Mongoose, use the following aggregation pipeline builder:
OwlelecMonitorModel
.aggregate()
.group({
"_id": "$gatewayId",
"doc": {
"$push": "$$ROOT"
}
})
.exec(function (err, result) {
var res = result.map(function (r){return r.doc;});
console.log(res);
});
Is there any way within Mongo, via MapReduce or the aggregation framework, to apply a second query based on the result set of the first, such as an aggregate within an aggregate, or a new emit/query within MapReduce?
For example, I have a materialized path pattern of items (which also includes parentId). I can get all of the roots simply by:
db.collection.find({parentId: null})
.toArray(function(err, docs) {
});
What I want to do is determine whether these docs have children, just a true/false flag. I can iterate through the docs using async's each and check, but on large result sets this is not very performant and causes event loop delays; I can use eachSeries, but that is just slow.
Ideally, I'd like to be able to handle this all within Mongo. Any suggestions if that's possible?
Edit: example collection:
{
_id: 1,
parentId: null,
name: 'A Root Node',
path: ''
}
{
_id: 2,
parentId: 1,
name: 'Child Node A',
path: ',1'
}
{
_id: 3,
parentId: 2,
name: 'Child Node B',
path: ',1,2'
}
{
_id: 4,
parentId: null,
name: 'Another Root Node',
path: ''
}
This basically represents two root nodes, where one root node ({_id: 1}) has two children (one being direct), example:
1
  2
    3
4
What I would like to do is run a query based on parentId, so I can get the root nodes by passing null (or the children of a node by passing its _id as parentId), and then determine whether any of the items in that result set have children. Example response for {parentId: null}:
[{
_id: 1,
parentId: null,
name: 'A Root Node',
path: '',
hasChildren: true
},
{
_id: 4,
parentId: null,
name: 'Another Root Node',
path: '',
hasChildren: false
}]
You could try creating an array of the parentIds from the materialized paths that you can then use in the aggregation pipeline to project the extra field/flag hasChildren.
This can be done by using the map() method on the cursor returned from the find() method. The following illustrates this:
var arr = db.collection.find({ "parentId": { "$ne": null } })
.map(function (e){ return e.path; })
.join('')
.split(',')
.filter(function (e){ return e; })
.map(function (e){ return parseInt(e); }),
parentIds = _.uniq(arr); /* using lodash uniq method to return a unique array */
Armed with this array of parentIds, you can then use the aggregation framework, in particular a $project pipeline stage that makes use of the set operator $setIsSubset, which takes two arrays and returns true when the first array is a subset of the second (including when the two arrays are equal), and false otherwise:
db.collection.aggregate([
{
"$match": {
"parentId": null
}
},
{
"$project": {
"parentId": 1,
"name": 1,
"path": 1,
"hasChildren": { "$setIsSubset": [ [ "$_id" ], parentIds ] }
}
}
], function (err, res) { console.log(res); });
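Putting the two steps together in Node might look like the rough sketch below; Collection here is a stand-in for your mongoose model, and lodash is assumed to be available as _:
var _ = require('lodash');

Collection.find({ "parentId": { "$ne": null } }, function (err, docs) {
  if (err) throw err;

  // Collect every ancestor id that appears in a materialized path, e.g. ',1' or ',1,2'
  var parentIds = _.uniq(
    docs
      .map(function (d) { return d.path; })
      .join('')
      .split(',')
      .filter(function (e) { return e; })
      .map(function (e) { return parseInt(e, 10); })
  );

  Collection.aggregate([
    { "$match": { "parentId": null } },
    { "$project": {
        "parentId": 1,
        "name": 1,
        "path": 1,
        "hasChildren": { "$setIsSubset": [ [ "$_id" ], parentIds ] }
    } }
  ], function (err, res) {
    if (err) throw err;
    console.log(res); // root nodes with a hasChildren flag
  });
});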
I have a collection links (schema below) with almost 500k entries.
{
url,
title,
owner,
stars: { users: [{ name }]},
createdAt
}
and I really do not understand why the simple aggregation projection
var projection = { $project: { _id: 1, url: 1, title: 1, createdAt: 1 } }
Link.aggregate([projection]).exec(resultCallback);
raises a
MongoError: exception: aggregation result exceeds maximum document size (16MB)
Could you explain this to me?
I'm using Mongoose (3.8.8) and Mongodb (2.6.0)
Not sure if the options available from MongoDB 2.6 onwards are fully supported in the .aggregate() method implementation in mongoose, but there should be an options hash/object available after the pipeline argument. So basically:
var pipeline = [{ $project: { _id: 1, url: 1, title: 1, createdAt: 1 } }];
Link.aggregate(pipeline,{ cursor: true}, function(err,cursor) {
});
Or if mongoose doesn't like that for some reason then just get the raw node driver collection:
var pipeline = [{ $project: { _id: 1, url: 1, title: 1, createdAt: 1 } }];
Link.collection.aggregate(pipeline,{ cursor: true}, function(err,cursor) {
if (err)
throw err;
// Do something with the cursor which is actually more akin to a node
// stream interface with a basic .next() method and other helpers.
});
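Once you have the cursor you can walk the results without materializing the whole set in memory. A rough sketch of the same call, assuming the driver's aggregation cursor exposes the usual each() method (check your driver version):
var pipeline = [{ $project: { _id: 1, url: 1, title: 1, createdAt: 1 } }];

Link.collection.aggregate(pipeline, { cursor: { batchSize: 100 } }, function (err, cursor) {
  if (err) throw err;

  // each() hands documents back one at a time; doc is null once the cursor is exhausted
  cursor.each(function (err, doc) {
    if (err) throw err;
    if (doc === null) return; // finished
    console.log(doc._id, doc.url); // process a single projected document here
  });
});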
Otherwise, since your output is blowing up the 16MB BSON limit, you can always write the result out to a collection with $out:
var pipeline = [
{ $project: { _id: 1, url: 1, title: 1, createdAt: 1 } },
{ $out: "newcollection" }
];
But since you are probably just really testing, why not just use the $limit pipeline stage until you work out the rest of your aggregation:
var pipeline = [
{ $project: { _id: 1, url: 1, title: 1, createdAt: 1 } },
{ $limit: 50 }
];
So there are a few different ways to handle things.