Use $lookup with a Conditional Join - node.js

Provided I have the following documents:
User
{
    uuid: string,
    isActive: boolean,
    lastLogin: datetime,
    createdOn: datetime
}
Projects
{
    id: string,
    users: [
        {
            uuid: string,
            otherInfo: ...
        },
        { ... more users }
    ]
}
I want to select all users that haven't logged in for 2 weeks and are inactive, or haven't logged in for 5 weeks and have no projects.
The 2-week part is working fine, but I can't figure out how to do the "5 weeks and don't have projects" part.
I came up with something like the below, but the last part does not work because $exists is obviously not a top-level operator.
Has anyone ever done anything like this?
Thanks!
return await this.collection
    .aggregate([
        {
            $match: {
                $and: [
                    {
                        $expr: {
                            $allElementsTrue: {
                                $map: {
                                    input: [`$lastLogin`, `$createdOn`],
                                    in: { $lt: [`$$this`, twoWeeksAgo] }
                                }
                            }
                        }
                    },
                    {
                        $or: [
                            { isActive: false },
                            {
                                $and: [
                                    {
                                        $expr: {
                                            $allElementsTrue: {
                                                $map: {
                                                    input: [`$lastLogin`, `$createdOn`],
                                                    in: { $lt: [`$$this`, fiveWeeksAgo] }
                                                }
                                            }
                                        }
                                    },
                                    {
                                        // No projects exist for this user
                                        $exists: {
                                            $lookup: {
                                                from: _.get(Config, `env.collection.projects`),
                                                let: { currentUser: `$$ROOT` },
                                                pipeline: [
                                                    {
                                                        $project: {
                                                            _id: 0,
                                                            users: {
                                                                $filter: {
                                                                    input: `$users`,
                                                                    as: `user`,
                                                                    cond: {
                                                                        $eq: [`$$user.uuid`, `$currentUser.uuid`]
                                                                    }
                                                                }
                                                            }
                                                        }
                                                    }
                                                ]
                                            }
                                        }
                                    }
                                ]
                            }
                        ]
                    }
                ]
            }
        }
    ])
    .toArray();

I'm not certain why you thought $expr was needed in the initial $match, but really you don't need it at all:
const getResults = () => {
    const now = Date.now();
    const twoWeeksAgo = new Date(now - (1000 * 60 * 60 * 24 * 7 * 2));
    const fiveWeeksAgo = new Date(now - (1000 * 60 * 60 * 24 * 7 * 5));
    // as long as mongoDriverCollectionReference points to a "Collection" object
    // for the "users" collection
    return mongoDriverCollectionReference.aggregate([
        // No $expr, since you can actually use an index. $expr cannot do that
        { "$match": {
            "$or": [
                // Active and "logged in"/created in the last 2 weeks
                {
                    "isActive": true,
                    "$or": [
                        { "lastLogin": { "$gte": twoWeeksAgo } },
                        { "createdOn": { "$gte": twoWeeksAgo } }
                    ]
                },
                // Also want those who...
                // Not Active and "logged in"/created in the last 5 weeks
                // we'll "tag" them later
                {
                    "isActive": false,
                    "$or": [
                        { "lastLogin": { "$gte": fiveWeeksAgo } },
                        { "createdOn": { "$gte": fiveWeeksAgo } }
                    ]
                }
            ]
        }},
        // Now we do the "conditional" stuff, just to return a matching result or not
        { "$lookup": {
            "from": _.get(Config, `env.collection.projects`), // there are a lot cleaner ways to register models than this
            "let": {
                "uuid": {
                    "$cond": {
                        "if": "$isActive", // this is boolean after all
                        "then": null,      // don't really want to match
                        "else": "$uuid"    // okay to match the 5-week results
                    }
                }
            },
            "pipeline": [
                // Nothing complex here, as null will match nothing. Just use $expr with $in for the array
                { "$match": { "$expr": { "$in": [ "$$uuid", "$users.uuid" ] } } },
                // Don't really need the detail, so just reduce any matches to one result of [null]
                { "$group": { "_id": null } }
            ],
            "as": "projects"
        }},
        // Now test if the $lookup returned something where it mattered
        { "$match": {
            "$or": [
                { "isActive": true }, // remember we selected the active ones already
                // So now we only need to know that the "inactive" returned no array result
                { "projects.0": { "$exists": false } }
            ]
        }}
    ]).toArray(); // returns a Promise
};
It's pretty simple: calculated expressions via $expr are actually really bad, and not what you want in a first pipeline stage. They are also not what you need here, since createdOn and lastLogin really should not have been merged into an array for $allElementsTrue, which amounts to an AND condition, where the logic you described really means OR. So the $or does just fine here.
So does the $or on the separation of conditions for isActive being true/false. Again, it's either "two weeks" OR "five weeks". And this certainly does not need $expr, since standard inequality range matching works fine, and uses an "index".
Then you really just want to do the "conditional" things in the let for $lookup, instead of your "does it exist" thinking. All you really need to know (since the range selection of dates is already done) is whether isActive is true or false. Where it's active (meaning by your logic you don't care about projects), simply make the $$uuid used within the $match pipeline stage a null value so it will not match, and the $lookup returns an empty array. Where it's false (also already matching the date conditions from earlier), you use the actual value and "join" (where there are projects, of course).
Then it's just a simple matter of keeping the active users, and only testing the remaining inactive ones to see whether the "projects" array from the $lookup actually returned anything. If it did not, then they just don't have projects.
It should probably be noted here that since users is an "array" within the projects collection, you use $expr with $in for the $match condition, comparing the single value to the array.
Note that for brevity we use $group inside the inner pipeline to return at most one result instead of possibly many matches to actual projects. You don't care about the content or the "count", but simply whether anything was returned at all. Again, following the presented logic.
This gets you your desired results, and it does so in a manner that is efficient and actually uses indexes where available.
Also, return await certainly does not do what you think it does, and in fact it triggers an ESLint warning (I suggest you enable ESLint in your project) since it's not a smart thing to do. It really does nothing, as you would need to await getResults() (as per the example naming) anyway; the await keyword is not "magic" but just a prettier way of writing then(), as well as hopefully being easier to understand once you see what async/await is really for syntactically.
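To illustrate that last point with a minimal sketch (using the example naming from above): both forms hand the caller the same Promise, and the first just adds a redundant await.
// Equivalent behavior: both return a Promise the caller must still await
async function withReturnAwait() {
    return await getResults(); // redundant await (ESLint rule: no-return-await)
}

function withoutAwait() {
    return getResults(); // same Promise, without the extra wrapping
}

// Either way, the call site is identical:
// const users = await withoutAwait();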

Related

Check whether or not a booking is possible

I'm currently implementing a booking system. The seller can specify how many items of the given type are available. The rule is simple: there can never be more bookings than available items.
Now I would like to find out how many existing bookings a new booking conflicts with, in order to check whether the limit is reached.
The following diagram should give you a little insight into what I'm trying to do.
https://ibb.co/4pJk8XV
In this example the maximum amount of concurrent bookings is 2. As you can see there are already 3 bookings. One of which has no end date specified.
Looking only at the existing bookings, there are never more than 2 bookings at a time.
Now I would like to check whether a new booking is possible. I know the start date for every booking. For bookings with no specified end date, the end date will be null.
I'm trying to achieve this using Mongoose.
There is no existing code regarding this problem.
Looking at each example separately: The first one with no end date should fail, since between the 07th and 09th there would be 3 bookings at a time. The second one should be fine as there is only one existing booking on the 06th.
We will start by finding all intersections with the new document. I'm only going to make one assumption: that newBooking.endDate is typed as a Date (if it's null, we substitute a date far in the future).
let newBookingStartDate = newBooking.startDate;
let newBookingEndDate = newBooking.endDate ? newBooking.endDate : new Date('3000-01-01');
Now for the query:
let results = await db.collection.aggregate([
    {
        $addFields: {
            tmpEndDate: {
                $cond: [
                    { $ne: ["$endDate", null] },
                    "$endDate",
                    newBookingEndDate
                ]
            }
        }
    },
    {
        $match: {
            $or: [
                {
                    $and: [
                        { startDate: { $lt: newBookingStartDate } },
                        { tmpEndDate: { $gt: newBookingStartDate } }
                    ]
                },
                {
                    $and: [
                        { startDate: { $gte: newBookingStartDate } },
                        { startDate: { $lt: newBookingEndDate } }
                    ]
                }
            ]
        }
    }
]).toArray()
We match documents by splitting into two cases:
document.startDate is less than newDocument.startDate - in this case we just need to check whether document.endDate is greater than newDocument.startDate; if it is, we have an intersection.
document.startDate is greater than or equal to newDocument.startDate - in this case we just need to check that document.startDate is less than newDocument.endDate, and again we have an intersection.
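Both cases collapse into the classic interval-overlap test, sketched here as a plain helper (hypothetical, treating bookings as half-open intervals):
// Two half-open intervals [aStart, aEnd) and [bStart, bEnd) intersect
// exactly when each one starts before the other ends
function overlaps(aStart, aEnd, bStart, bEnd) {
    return aStart < bEnd && bStart < aEnd;
}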
Now we need to iterate over the documents we found and calculate intersections between them by running the same query:
for (let i = 0; i < results.length; i++) {
    let doc = results[i];
    let otherIds = results.map(val => val._id);
    let docStartDate = doc.startDate;
    let docEndDate = doc.endDate ? doc.endDate : new Date('3000-01-01');
    let count = await db.collection.aggregate([
        {
            $match: {
                _id: { $in: otherIds }
            }
        },
        {
            $addFields: {
                tmpEndDate: {
                    $cond: [
                        { $ne: ["$endDate", null] },
                        "$endDate",
                        docEndDate
                    ]
                }
            }
        },
        {
            $match: {
                $or: [
                    {
                        $and: [
                            { startDate: { $lt: docStartDate } },
                            { tmpEndDate: { $gt: docStartDate } }
                        ]
                    },
                    {
                        $and: [
                            { startDate: { $gte: docStartDate } },
                            { startDate: { $lt: docEndDate } }
                        ]
                    }
                ]
            }
        },
        {
            $count: "count"
        }
    ]).toArray()
    if (count[0].count >= 3) {
        return false;
    }
}
If any of the count results is 3 or greater (3 because I didn't remove the current document's ID from the array, and it will always intersect with itself), return false, as inserting a new document would put you over the threshold.
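A possible refinement (a sketch, not part of the original answer): filter the current document's own _id out of otherIds, assuming ObjectId values and a maxConcurrentBookings setting, so the threshold reads naturally:
// Exclude the current document so it does not count against itself
let otherIds = results
    .filter(val => !val._id.equals(doc._id)) // ObjectId.equals comparison
    .map(val => val._id);

// ...run the same aggregation as above with the filtered list...

// $count emits no document at all when nothing matches, so guard the access
if (count.length && count[0].count >= maxConcurrentBookings) {
    return false;
}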

Mongoose find for array elements with limit

I'm using https://mongoosejs.com/ for querying MongoDB.
I want to find data for an array of elements, like this:
var a = ["a","b","c"];
topic.find({topic:a}).limit(4).exec(.....
If I use it like this, the limit applies to the query as a whole, but I need a separate limit per element. That means:
limit for a = 4, limit for b = 4, limit for c = 4
On Stack Overflow you ask a question and we answer it. An answer may have 1, 2, or 3 comments, and Stack Overflow sends the comments of all answers with a limit. I want to do the same thing.
You could use $facet for this which will give you a single result document:
db.collection.aggregate([
    {
        $facet: {
            "a": [{ $match: { "topic": "a" } }, { $limit: 4 }],
            "b": [{ $match: { "topic": "b" } }, { $limit: 4 }],
            "c": [{ $match: { "topic": "c" } }, { $limit: 4 }]
        }
    }
])
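The single result document is then shaped roughly like this (shape only; contents depend on your data):
// One document, with one array of up to 4 results per facet
{
    "a": [ /* up to 4 documents with topic "a" */ ],
    "b": [ /* up to 4 documents with topic "b" */ ],
    "c": [ /* up to 4 documents with topic "c" */ ]
}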
If you need separate documents you would probably append the following stages at the end of the above pipeline:
{
    $project: {
        "result": { $concatArrays: [ "$a", "$b", "$c" ] }
    }
}, {
    $unwind: "$result"
}
I think the solution here would be to concatenate the results.
const aIds = (await topic.find({ topic: "a" }).limit(4)).map(el => el._id);
const bIds = (await topic.find({ topic: "b" }).limit(4)).map(el => el._id);
const cIds = (await topic.find({ topic: "c" }).limit(4)).map(el => el._id);
topic.find({ _id: { $in: aIds.concat(bIds).concat(cIds) } }).exec(.....
Not the most efficient, but it will work.
UPDATE
Initially, I was not aware that this question related to Mongoose.
I'm not familiar with it, but the idea remains the same:
find the 3 topics (a, b, c), each with its own limit, then join them.
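A minimal sketch of that idea with Mongoose (assuming a Topic model), running the three limited queries concurrently rather than one after another:
const topics = ["a", "b", "c"];

// One limited query per topic value, run in parallel
const perTopic = await Promise.all(
    topics.map(t => Topic.find({ topic: t }).limit(4).exec())
);

// Flatten the three result arrays into one
const combined = perTopic.flat();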

Can I put query inside lookup operator of mongoose? [duplicate]

I have a pretty simple $lookup aggregation query like the following:
{ '$lookup': {
    'from': 'edge',
    'localField': 'gid',
    'foreignField': 'to',
    'as': 'from'
}}
When I run this on a match with enough documents I get the following error:
Command failed with error 4568: 'Total size of documents in edge
matching { $match: { $and: [ { from: { $eq: "geneDatabase:hugo" }
}, {} ] } } exceeds maximum document size' on server
All attempts to limit the number of documents fail. allowDiskUse: true does nothing. Sending a cursor in does nothing. Adding in a $limit into the aggregation also fails.
How could this be?
Then I see the error again. Where did that $match, $and, and $eq come from? Is the aggregation pipeline behind the scenes farming out the $lookup call to another aggregation, one it runs on its own that I have no ability to provide limits for or use cursors with?
What is going on here?
As stated earlier in the comments, the error occurs because, when performing the $lookup, which by default produces a target "array" within the parent document from the results of the foreign collection, the total size of documents selected for that array causes the parent to exceed the 16MB BSON Limit.
The counter for this is to process with an $unwind which immediately follows the $lookup pipeline stage. This actually alters the behavior of $lookup such that instead of producing an array in the parent, the results are instead a "copy" of each parent for every document matched.
Pretty much just like regular usage of $unwind, with the exception that instead of processing as a "separate" pipeline stage, the unwinding action is actually added to the $lookup pipeline operation itself. Ideally you also follow the $unwind with a $match condition, which likewise creates a matching argument that gets added to the $lookup. You can actually see this in the explain output for the pipeline.
The topic is actually covered (briefly) in a section of Aggregation Pipeline Optimization in the core documentation:
$lookup + $unwind Coalescence
New in version 3.2.
When a $unwind immediately follows another $lookup, and the $unwind operates on the as field of the $lookup, the optimizer can coalesce the $unwind into the $lookup stage. This avoids creating large intermediate documents.
Best demonstrated with a listing that puts the server under stress by creating "related" documents that would exceed the 16MB BSON limit. Done as briefly as possible to both break and work around the BSON Limit:
const MongoClient = require('mongodb').MongoClient;
const uri = 'mongodb://localhost/test';

function data(data) {
    console.log(JSON.stringify(data, undefined, 2))
}

(async function() {
    let db;
    try {
        // Note: 2.x driver style, where MongoClient.connect() resolves to a Db
        db = await MongoClient.connect(uri);

        console.log('Cleaning....');
        // Clean data
        await Promise.all(
            ["source", "edge"].map(c => db.collection(c).remove())
        );

        console.log('Inserting...')
        await db.collection('edge').insertMany(
            Array(1000).fill(1).map((e, i) => ({ _id: i + 1, gid: 1 }))
        );
        await db.collection('source').insert({ _id: 1 })

        console.log('Fattening up....');
        await db.collection('edge').updateMany(
            {},
            { $set: { data: "x".repeat(100000) } }
        );

        // The full pipeline. The failing test uses only the $lookup stage
        let pipeline = [
            { $lookup: {
                from: 'edge',
                localField: '_id',
                foreignField: 'gid',
                as: 'results'
            }},
            { $unwind: '$results' },
            { $match: { 'results._id': { $gte: 1, $lte: 5 } } },
            { $project: { 'results.data': 0 } },
            { $group: { _id: '$_id', results: { $push: '$results' } } }
        ];

        // List and iterate each test case
        let tests = [
            'Failing.. Size exceeded...',
            'Working.. Applied $unwind...',
            'Explain output...'
        ];

        for (let [idx, test] of Object.entries(tests)) {
            console.log(test);
            try {
                let currpipe = (( +idx === 0 ) ? pipeline.slice(0, 1) : pipeline),
                    options = (( +idx === tests.length - 1 ) ? { explain: true } : {});

                await new Promise((end, error) => {
                    let cursor = db.collection('source').aggregate(currpipe, options);
                    for (let [key, value] of Object.entries({ error, end, data }))
                        cursor.on(key, value);
                });
            } catch (e) {
                console.error(e);
            }
        }
    } catch (e) {
        console.error(e);
    } finally {
        db.close();
    }
})();
After inserting some initial data, the listing will attempt to run an aggregate merely consisting of $lookup which will fail with the following error:
{ MongoError: Total size of documents in edge matching pipeline { $match: { $and : [ { gid: { $eq: 1 } }, {} ] } } exceeds maximum document size
Which is basically telling you the BSON limit was exceeded on retrieval.
By contrast, the next attempt adds the $unwind and $match pipeline stages.
The Explain output:
{
    "$lookup": {
        "from": "edge",
        "as": "results",
        "localField": "_id",
        "foreignField": "gid",
        "unwinding": {              // $unwind now is unwinding
            "preserveNullAndEmptyArrays": false
        },
        "matching": {               // $match now is matching
            "$and": [               // and actually executed against
                {                   // the foreign collection
                    "_id": { "$gte": 1 }
                },
                {
                    "_id": { "$lte": 5 }
                }
            ]
        }
    }
},
// $unwind and $match stages removed
{
    "$project": {
        "results": { "data": false }
    }
},
{
    "$group": {
        "_id": "$_id",
        "results": { "$push": "$results" }
    }
}
And that result of course succeeds: since the results are no longer being placed into the parent document, the BSON limit cannot be exceeded.
This really just happens as a result of adding $unwind only, but the $match is added, for example, to show that it also gets folded into the $lookup stage, and that the overall effect is to "limit" the results returned in an effective way, since it's all done in that $lookup operation and no results other than those matching are actually returned.
By constructing things this way, you can query for "referenced data" that would otherwise exceed the BSON limit, and then, if you want, $group the results back into an array format once they have been effectively filtered by the "hidden query" that is actually being performed by $lookup.
MongoDB 3.6 and Above - Additional for "LEFT JOIN"
As all the content above notes, the BSON Limit is a "hard" limit that you cannot breach, and this is generally why the $unwind is necessary as an interim step. There is however the limitation that the "LEFT JOIN" becomes an "INNER JOIN" by virtue of the $unwind, where it cannot preserve the content. Even preserveNullAndEmptyArrays would negate the "coalescence" and still leave the array intact, causing the same BSON Limit problem.
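For reference, the preserving form of $unwind looks like this; as noted, it defeats the coalescence, so the BSON limit can still be hit:
// Preserves parent documents with no matches, but prevents the
// $lookup + $unwind coalescence optimization
{ "$unwind": { "path": "$results", "preserveNullAndEmptyArrays": true } }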
MongoDB 3.6 adds new syntax to $lookup that allows a "sub-pipeline" expression to be used in place of the "local" and "foreign" keys. So instead of using the "coalescence" option as demonstrated, as long as the produced array does not also breach the limit, it is possible to put conditions in that pipeline which return the array "intact", and possibly with no matches, as would be indicative of a "LEFT JOIN".
The new expression would then be:
{ "$lookup": {
"from": "edge",
"let": { "gid": "$gid" },
"pipeline": [
{ "$match": {
"_id": { "$gte": 1, "$lte": 5 },
"$expr": { "$eq": [ "$$gid", "$to" ] }
}}
],
"as": "from"
}}
In fact this is basically what MongoDB is doing "under the covers" with the previous syntax, since 3.6 uses $expr "internally" in order to construct the statement. The difference of course is that there is no "unwinding" option present in how the $lookup actually gets executed.
If no documents are actually produced as a result of the "pipeline" expression, then the target array within the master document will in fact be empty, just as a "LEFT JOIN" actually does, and this would be the normal behavior of $lookup without any other options.
However, the output array MUST NOT cause the document where it is being created to exceed the BSON Limit. So it really is up to you to ensure that any "matching" content stays under this limit, or the same error will persist, unless of course you actually use $unwind to effect the "INNER JOIN".
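One way to keep that in check (a minimal sketch, not from the original answer) is a $limit stage inside the sub-pipeline, capping how many joined documents land in the array:
{ "$lookup": {
    "from": "edge",
    "let": { "gid": "$gid" },
    "pipeline": [
        { "$match": { "$expr": { "$eq": [ "$$gid", "$to" ] } } },
        { "$limit": 5 } // hypothetical cap; tune to your document sizes
    ],
    "as": "from"
}}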
I had the same issue with the following Node.js query, because the 'redemptions' collection has more than 400,000 documents. I am using MongoDB server 4.2 and Node.js driver 3.5.3.
db.collection('businesses').aggregate([
    {
        $lookup: { from: 'redemptions', localField: "_id", foreignField: "business._id", as: "redemptions" }
    },
    {
        $project: {
            _id: 1,
            name: 1,
            email: 1,
            "totalredemptions": { $size: "$redemptions" }
        }
    }
])
I modified the query as below to make it work super fast.
db.collection('businesses').aggregate([
    query, // a preceding stage defined elsewhere
    {
        $lookup: {
            from: 'redemptions',
            let: { "businessId": "$_id" },
            pipeline: [
                { $match: { $expr: { $eq: ["$business._id", "$$businessId"] } } },
                { $group: { _id: "$_id", totalCount: { $sum: 1 } } },
                { $project: { "_id": 0, "totalCount": 1 } }
            ],
            as: "redemptions"
        }
    },
    {
        $project: {
            _id: 1,
            name: 1,
            email: 1,
            "totalredemptions": { $size: "$redemptions" }
        }
    }
])

Aggregation Timing Out

I am using aggregates to query my schema for counts over date ranges. My problem is that I am not getting any response from the server (it times out every time); other Mongoose queries are working fine (find, save, etc.), and with aggregates it depends on the pipeline (when I only use $match I get a response; when I add $unwind I don't get any).
Connection Code:
var promise = mongoose.connect('mongodb://<username>:<password>@<db>.mlab.com:<port>/<db-name>', {
    useMongoClient: true,
    replset: {
        ha: true,        // Make sure the high availability checks are on
        haInterval: 5000 // Run every 5 seconds
    }
});
promise.then(function(db) {
    console.log('DB Connected');
}).catch(function(e) {
    console.log('DB Not Connected');
    console.error(e.message);
    process.exit(1);
});
Schema:
var ProspectSchema = new Schema({
    contact_name: {
        type: String,
        required: true
    },
    company_name: {
        type: String,
        required: true
    },
    contact_info: {
        type: Array,
        required: true
    },
    description: {
        type: String,
        required: true
    },
    product: {
        type: Schema.Types.ObjectId, ref: 'Product'
    },
    progression: {
        type: String
    },
    creator: {
        type: String
    },
    sales: {
        type: Schema.Types.ObjectId,
        ref: 'User'
    },
    technical_sales: {
        type: Schema.Types.ObjectId,
        ref: 'User'
    },
    actions: [{
        type: { type: String },
        description: { type: String },
        date: { type: Date }
    }],
    sales_connect_id: {
        type: String
    },
    date_created: {
        type: Date,
        default: Date.now
    }
});
Aggregation code:
exports.getActionsIn = function(start_date, end_date) {
    var start = new Date(start_date);
    var end = new Date(end_date);
    return Prospect.aggregate([
        {
            $match: {
                // "actions": {
                //     $elemMatch: {
                //         "type": { "$exists": true }
                //     }
                // }
                "actions.date": {
                    $gte: start,
                    $lte: end
                }
            }
        },
        {
            $project: {
                _id: 0,
                actions: 1
            }
        },
        {
            $unwind: "actions"
        },
        {
            $group: {
                _id: "actions.date",
                count: {
                    $sum: 1
                }
            }
        }
        // ,{
        //     $project: {
        //         _id: 0,
        //         date: {
        //             $dateToString: {
        //                 format: "%d/%m/%Y",
        //                 date: "actions.date"
        //             }
        //         }
        //         // ,
        //         // count: "$count"
        //     }
        // }
    ]).exec();
}
Calling the Aggregation:
router.get('/test', function(req, res, next) {
    var start_date = req.query.start_date;
    var end_date = req.query.end_date;
    ProspectCont.getActionsIn(start_date, end_date).then(function(value, err) {
        if (err) console.log(err);
        res.json(value);
    });
})
My main problem is that I get no response at all. I can work with an error message; the issue is that I'm not getting any, so I don't know what is wrong.
Mongoose Version: 4.11.8
P.S. I tried multiple variations of the aggregation pipeline, so this isn't my first try; I have an aggregation working on the main prospects schema, but not on the actions sub-document.
You have several problems here, mostly from missing concepts. Lazy readers can skip to the bottom for the full pipeline example, but the main body here is the explanation of why things are done as they are.
You are trying to select on a date range. The very first thing to check on any long running operation is that you have a valid index. You might have one, or you might not. But you should issue: ( from the shell )
db.prospects.createIndex({ "actions.date": 1 })
Just to be sure. You probably really should add this to the schema definition so you know this should be deployed. So add to your defined schema:
ProspectSchema.index({ "actions.date": 1 })
When querying with a "range" on elements of an array, you need to understand that those are "multiple conditions" which you are expecting to match elements "between". Whilst you generally can get away with querying a "single property" of an array using "Dot Notation", you are missing that the application of $gte and $lte is like specifying the property several times with $and explicitly.
Whenever you have such "multiple conditions", you always mean to use $elemMatch. Without it, you are simply testing every value in the array to see if it is greater than or less than (some may be greater and some may be lesser). The $elemMatch operator makes sure that "both" are applied to the same "element", and not just to all array values as "Dot Notation" exposes them:
{ "$match": {
"actions": {
"$elemMatch": { "date": { "$gte": start, "$lte: end } }
}
}}
That will now only match documents where the "array elements" fall between the specified dates. Without it, you are selecting and processing a lot more data which is irrelevant to the selection.
Array Filtering: Marked in bold because its prominence cannot be ignored. Any initial $match works just like any "query", in that its "job" is to "select documents" valid to the expression. This however does not have any effect on the contents of the array in the documents returned.
Whenever you have such a condition for document selection, you nearly always intend to "filter" such content from the array itself. This is a separate process, and really should be performed before any other operations that work with the content. Especially $unwind.
So you really should add a $filter in either an $addFields or $project, as is appropriate to your intent, "immediately" following any document selection:
{ "$project": {
"_id": 0,
"actions": {
"$filter": {
"input": "$actions",
"as": "a",
"in": {
"$and": [
{ "$gte": [ "$$a.date", start ] },
{ "$lte": [ "$$a.date", end ] }
]
}
}
}
}}
Now the array content, which you already know "must" have contained at least one valid item due to the initial query conditions, is "reduced" down to only those entries that actually match the date range that you want. This removes a lot of overhead from later processing.
Note the different "logical variants" of $gte and $lte in use within the $filter condition. These evaluate to return a boolean for expressions that require them.
Grouping: It's probably just an attempt at getting a result, but the code you have does not really do anything with the dates in question. Since typical date values are provided with millisecond precision, you generally want to reduce them.
The commented code suggests usage of $dateToString within a $project. It is strongly recommended that you do not do that. If you intend such a reduction, then supply that expression directly to the grouping key within $group instead:
{ "$group": {
"_id": {
"$dateToString": {
"format": "%Y-%m-%d",
"date": "$actions.date"
}
},
"count": { "$sum": 1 }
}}
I personally don't like returning a "string" when a natural Date object serializes properly for me already. So I like to use the "math" approach to "round" dates instead:
{ "$group": {
"_id": {
"$add": [
{ "$subtract": [
{ "$subtract": [ "$actions.date", new Date(0) ] },
{ "$mod": [
{ "$subtract": [ "$actions.date", new Date(0) ] },
1000 * 60 * 60 * 24
]}
],
new Date(0)
]
},
"count": { "$sum": 1 }
}}
That returns a valid Date object "rounded" to the current day. Mileage may vary on preferred approaches, but it's the one I like. And it takes the least bytes to transfer.
The usage of new Date(0) represents the "epoch date". So when you $subtract one BSON Date from another, you end up with the millisecond difference between the two as an integer. When you $add an integer value to a BSON Date, you get a new BSON Date representing the sum of the milliseconds of the two. This is the basis of converting to numeric, rounding to the nearest start of day, and then converting the numeric back to a Date value.
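The same rounding, sketched in plain JavaScript to show what the pipeline math is doing:
// Mirrors the pipeline: to milliseconds ($subtract), drop the remainder
// of a day ($mod), then back to a Date ($add to the epoch)
const dayMs = 1000 * 60 * 60 * 24;
const d = new Date('2017-08-15T13:45:00Z');
const ms = d.getTime();                      // milliseconds since epoch
const rounded = new Date(ms - (ms % dayMs)); // start of that UTC day
console.log(rounded.toISOString());          // "2017-08-15T00:00:00.000Z"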
By making that statement directly within the $group rather than a $project, you avoid what actually gets interpreted as "go through all the data and return this calculated value, then go and do...". Much the same as working through a pile of objects, marking them with a pen first, and then actually counting them as a separate step.
As a single pipeline stage it saves considerable resources, since you do the accumulation at the same time as calculating the value to accumulate on. When you think it through, much like the provided analogy, it just makes a lot of sense.
As a full pipeline example you would put the above together as:
Prospect.aggregate([
    { "$match": {
        "actions": {
            "$elemMatch": { "date": { "$gte": start, "$lte": end } }
        }
    }},
    { "$project": {
        "_id": 0,
        "actions": {
            "$filter": {
                "input": "$actions",
                "as": "a",
                "in": {
                    "$and": [
                        { "$gte": [ "$$a.date", start ] },
                        { "$lte": [ "$$a.date", end ] }
                    ]
                }
            }
        }
    }},
    { "$unwind": "$actions" },
    { "$group": {
        "_id": {
            "$dateToString": {
                "format": "%Y-%m-%d",
                "date": "$actions.date"
            }
        },
        "count": { "$sum": 1 }
    }}
])
And honestly, if after making sure an index is in place and following that pipeline you still have timeout problems, then reduce the date selection down until you get a reasonable response time.
If it's still taking too long (or the date reduction is not reasonable) then your hardware simply is not up to the task. If you really have a lot of data then you have to be reasonable with expectations. So scale up or scale out, but those things are outside the scope of any question here.
As it stands, those improvements should make a significant difference over any attempt shown so far, mostly due to a few fundamental concepts that were being missed.

how to combine array of object result in mongodb

How can I combine a matched document's subdocuments together as one and return them as an array of objects? I have tried $group, but it doesn't seem to work.
My query (this returns an array of objects; in this case there are two):
User.find({
    'business_details.business_location': {
        $near: coords,
        $maxDistance: maxDistance
    },
    'deal_details.deals_expired_date': {
        $gte: new Date()
    }
}, {
    'deal_details': 1
}).limit(limit).exec(function(err, locations) {
    if (err) {
        return res.status(500).json(err)
    }
    console.log(locations)
the console.log(locations) result
// give me the result below
[{
    _id: 55c0b8c62fd875a93c8ff7ea, // first document
    deal_details: [{
        deals_location: '101.6833,3.1333',
        deals_price: 12.12 // 1st deal
    }, {
        deals_location: '101.6833,3.1333',
        deals_price: 34.3 // 2nd deal
    }],
    business_details: {}
}, {
    _id: 55a79898e0268bc40e62cd3a, // second document
    deal_details: [{
        deals_location: '101.6833,3.1333',
        deals_price: 12.12 // 3rd deal
    }, {
        deals_location: '101.6833,3.1333',
        deals_price: 34.78 // 4th deal
    }, {
        deals_location: '101.6833,3.1333',
        deals_price: 34.32 // 5th deal
    }],
    business_details: {}
}]
What I want to do is combine both deal_details fields together and return them as one array of objects: it should contain all 5 deals in one array instead of two separate arrays.
I have tried to do it in my backend (Node.js) using concat or push; however, when there are more than 2 matching documents I have problems concatenating them. Is there any way to combine all matching documents and return them as one, like I described above?
What you are probably missing here is the $unwind pipeline stage, which is what you typically use to "de-normalize" array content, particularly when your grouping operation intends to work across documents in your query result:
User.aggregate(
    [
        // Your basic query conditions
        // (note: $near cannot be used inside an aggregation $match;
        //  see the $geoNear alternative mentioned below)
        { "$match": {
            "business_details.business_location": {
                "$near": coords,
                "$maxDistance": maxDistance
            },
            "deal_details.deals_expired_date": {
                "$gte": new Date()
            }
        }},
        // Limit query results here
        { "$limit": limit },
        // Unwind the array
        { "$unwind": "$deal_details" },
        // Group on the common location
        { "$group": {
            "_id": "$deal_details.deals_location",
            "prices": {
                "$push": "$deal_details.deals_price"
            }
        }}
    ],
    function(err, results) {
        if (err) throw err;
        console.log(JSON.stringify(results, undefined, 2));
    }
);
Which gives output like:
{
    "_id": "101.6833,3.1333",
    "prices": [
        12.12,
        34.3,
        12.12,
        34.78,
        34.32
    ]
}
Depending on how many documents actually match the grouping.
Alternately, you might want to look at the $geoNear pipeline stage, which gives a bit more control, especially when dealing with content in arrays.
Also beware that with "location" data in an array, only the "nearest" result is considered here, and not "all" of the array content. So other items in the array may not actually be "near" the queried point. That is more of a design consideration, though, as any query operation you do will need to take it into account.
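A minimal sketch of the $geoNear alternative mentioned above (assuming legacy coordinate pairs with a 2d index; parameter values are illustrative):
User.aggregate([
    // $geoNear must be the first stage, and replaces the $near condition
    { "$geoNear": {
        "near": coords,              // e.g. [ 101.6833, 3.1333 ]
        "distanceField": "distance", // required: field to write the distance to
        "maxDistance": maxDistance,
        "query": {
            "deal_details.deals_expired_date": { "$gte": new Date() }
        }
    }},
    { "$limit": limit },
    { "$unwind": "$deal_details" },
    { "$group": {
        "_id": "$deal_details.deals_location",
        "prices": { "$push": "$deal_details.deals_price" }
    }}
])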
You can merge them with reduce:
locations = locations.reduce(function(prev, location) {
    return prev.concat(location.deal_details);
}, []);
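On current Node.js versions the same merge can be written with flatMap (an alternative, not from the original answer):
// Same result as the reduce/concat version
locations = locations.flatMap(location => location.deal_details);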
