Mongoose/Mongodb previous and next in embedded document - node.js

I'm learning Mongodb/Mongoose/Express and have come across a fairly complex query (relative to my current level of understanding anyway) that I'm not sure how best to approach. I have a collection - to keep it simple let's call it entities - with an embedded actions array:
name: String,
actions: [{
    name: String,
    date: Date
}]
What I'd like to do is to return an array of documents with each containing the most recent action (or most recent to a specified date), and the next action (based on the same date).
Would this be possible with one find() query, or would I need to break this down into multiple queries and merge the results to generate one result array? I'm looking for the most efficient route possible.

Provided that your "actions" are inserted with the "most recent" being the last entry in the list, which will usually be the case unless you are specifically updating items and changing dates, then all you really want to do is "project" the last item of the array. This is what the $slice projection operator is for:
Model.find({}, { "actions": { "$slice": -1 } }, function(err, docs) {
    // each doc contains an "actions" array with only the last item
});
If indeed you are "updating" array items and changing dates, but you want to query for the most recent on a regular basis, then you are probably best off keeping the array ordered. You can do this with a few modifiers such as:
Model.update(
    { "_id": ObjectId("541f7bbb699e6dd5a7caf2d6") },
    { "$push": { "actions": { "$each": [], "$sort": { "date": 1 } } } },
    function(err, numAffected) {
    }
);
Which is actually more of a trick that you can do with the $sort modifier to simply sort the existing array elements without adding or removing anything. In versions prior to 2.6 you need the $slice "update" modifier in here as well; it can be set to a value larger than the number of array elements you ever expect if you do not actually want to restrict the possible size, though restricting it is probably a good idea.
Unfortunately, if you were "updating" via a $set statement, then you cannot do this "sorting" in a single update statement, as MongoDB will not allow both types of operations on the array at once. But if you can live with that, then this is a way to keep the array ordered so the first query form works.
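For illustration, a sketch of the pre-2.6 form of that ordering trick; the -500 cap is an arbitrary assumption chosen only to be larger than any array size you expect, since before 2.6 the $slice modifier had to be negative ( keeping the last N elements ):
Model.update(
    { "_id": ObjectId("541f7bbb699e6dd5a7caf2d6") },
    { "$push": {
        "actions": {
            "$each": [],               // add nothing new
            "$sort": { "date": 1 },    // re-sort the existing elements
            "$slice": -500             // arbitrary cap; keeps the last 500 elements
        }
    }},
    function(err, numAffected) {
        // array is now sorted by date
    }
);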
If it just seems too hard to keep an array ordered by date, then you can in fact retrieve the largest value by means of the .aggregate() method. This allows greater manipulation of the documents than is available to basic queries, at a little more cost:
Model.aggregate([
    // Unwind the array to de-normalize as documents
    { "$unwind": "$actions" },
    // Sort the contents per document _id and inner date
    { "$sort": { "_id": 1, "actions.date": 1 } },
    // Group back with the "last" element only
    { "$group": {
        "_id": "$_id",
        "name": { "$last": "$name" },
        "actions": { "$last": "$actions" }
    }}
], function(err, docs) {
});
And that will "pull apart" the array using the $unwind operator, then process with a next stage to $sort the contents by "date". In the $group pipeline stage the "_id" means to use the original document key to "collect" on, and the $last operator picks the field values from the "last" document ( de-normalized ) on that grouping boundary.
So there are various things that you can do, but of course the best way is to keep your array ordered and use the basic projection operators to simply get the last item in the list.
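As a small footnote to that first form: if you want the neighbouring pair of entries rather than just the last one (the "previous and next" of the question title), $slice accepts any negative count, so, assuming the array is kept ordered by date, a minimal sketch:
Model.find({}, { "actions": { "$slice": -2 } }, function(err, docs) {
    // each doc carries the last two actions: the previous one, then the most recent
});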

Related

How can I optimize this query in mongo db?

Here is the query:
const tags = await mongo
    .collection("positive")
    .aggregate<{ word: string; count: number }>([
        {
            $lookup: {
                from: "search_history",
                localField: "search_id",
                foreignField: "search_id",
                as: "history",
                pipeline: [
                    {
                        $match: {
                            created_at: { $gt: prevSunday.toISOString() },
                        },
                    },
                    {
                        $group: {
                            _id: "$url",
                        },
                    },
                ],
            },
        },
        {
            $match: {
                history: { $ne: [] },
            },
        },
        {
            $group: {
                _id: "$word",
                url: {
                    $addToSet: "$history._id",
                },
            },
        },
        {
            $project: {
                _id: 0,
                word: "$_id",
                count: {
                    $size: {
                        $reduce: {
                            input: "$url",
                            initialValue: [],
                            in: {
                                $concatArrays: ["$$value", "$$this"],
                            },
                        },
                    },
                },
            },
        },
        {
            $sort: {
                count: -1,
            },
        },
        {
            $limit: 50,
        },
    ])
    .toArray();
I think I need an index but I am not sure how or where to add it.
Perhaps the performance of this operation should be revisited after we confirm that it is satisfying the desired application logic and that the approach itself is reasonable.
When it comes to performance, there is nothing that can be done to improve efficiency on the positive collection if the intention is to process every document. By definition, processing all documents requires a full collection scan.
To efficiently support the $lookup on the search_history collection, you may wish to confirm that an index on { search_id: 1, created_at: 1, url: 1 } exists. Providing the .explain("allPlansExecution") output would allow us to better understand the current performance characteristics.
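For reference, a sketch of creating that index from the mongo shell, assuming the collection and field names shown in the aggregation:
db.search_history.createIndex({ search_id: 1, created_at: 1, url: 1 })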
Desired Logic
Updating the question to include details about the schemas and the purpose of the aggregation would be very helpful with respect to understanding the overall situation. Just looking at the aggregation, it appears to be doing the following:
For every single document in the positive collection, add a new field called history.
This new field is a list of url values from the search_history collection where the corresponding document has a matching search_id value and was created_at after last Sunday.
The aggregation then filters to only keep documents where the new history field has at least one entry.
The next stage then groups the results together by word. The $addToSet operator is used here, but it may be generating an array of arrays rather than de-duplicated urls.
The final 3 stages of the aggregation seem to be focused on calculating the number of urls and returning the top 50 results by word sorted on that size in descending order.
Is this what you want? In particular the following aspects may be worth confirming:
Is it your intention to process every document in the positive collection? This may be the case, but it's impossible to tell without any schema/use-case context.
Is the size calculation of the urls correct? It seems like you may need to use a $map when doing the $addToSet for the $group instead of using $reduce for the subsequent $project.
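On that second point, one possible fix (a sketch only, not verified against your data) is to de-duplicate while flattening by swapping $concatArrays for $setUnion inside the existing $reduce, since $setUnion returns the de-duplicated union of its input arrays:
{
    $project: {
        _id: 0,
        word: "$_id",
        count: {
            $size: {
                $reduce: {
                    input: "$url",
                    initialValue: [],
                    // $setUnion de-duplicates while it combines the arrays
                    in: { $setUnion: ["$$value", "$$this"] }
                }
            }
        }
    }
}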
The best thing to do is to limit the number of documents passed to each stage.
Indexes are used by MongoDB in an aggregation only when the first stage is a $match, and at most one index will be used.
So the best thing to do is to have a match on an indexed field that is very restrictive.
Moreover, please note that $limit, $skip and $sample are not panaceas because they still scan the entire collection.
A way to efficiently limit the number of documents selected in the first stage is to use "pagination". You can make it work like this:
Once every X requests:
Count the number of docs in the collection
Divide this into chunks of at most Y docs each
Find the _ids of the docs at positions Y, 2Y, 3Y, etc. with skip and limit
Cache the results in redis/memcache (or as a global variable if you really cannot do otherwise)
On every request:
Get the current chunk to scan by reading the redis keys used and nbChunks
Get the _ids cached in redis that delimit the next aggregation, id:${used%nbChunks} and id:${(used%nbChunks)+1} respectively
Aggregate using a $match with { _id: { $gte: ObjectId(id0), $lt: ObjectId(id1) } }
Increment used; if used > X then update the chunks
Further optimisation
If using redis, prefix every key with ${cluster.worker.id}: to avoid hot keys.
Notes
Step 3 of the chunk setup can be a really long and intensive process, so do it only when necessary, let's say every X ~ 1k requests.
If you are scanning the last chunk, do not put the $lt.
Once this process is implemented, your job is to find the sweet spot of X and Y that suits your needs, constrained by a Y large enough to retrieve as many documents as possible without taking too long, and an X that keeps the chunks roughly equal as the collection grows.
This process is a bit long to implement, but once it is in place, time complexity is ~O(Y) rather than ~O(N). Indeed, with the $match being the first stage and _id being an indexed field, this first stage is really fast and limits the scan to at most Y documents.
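As a rough sketch of the per-request side of this scheme (the redis client, the id:${n} key layout and the used/nbChunks counters are illustrative assumptions taken from the steps above, not a real API):
// Illustrative sketch only: boundary _ids are assumed to be cached beforehand
const i = used % nbChunks;
const [id0, id1] = await redis.mget(`id:${i}`, `id:${i + 1}`);
const range = id1
    ? { $gte: new ObjectId(id0), $lt: new ObjectId(id1) }  // ObjectId from the mongodb driver
    : { $gte: new ObjectId(id0) };                         // last chunk: no $lt
const docs = await collection.aggregate([
    { $match: { _id: range } },  // first stage, served by the _id index
    // ...the rest of your pipeline here
]).toArray();
used += 1;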
Hope it helps =) Make sure to ask more if needed =)

Mongoose - update last element in array (update document with an aggregator)

My question builds off this: Get only the last element of array mongoose. However, I'm using Mongoose and want to also update a field in the last element of the array, not just retrieve it.
How can I modify the last element/object in the rounds array to change from status: started to status: terminated?
Document to modify
{
    "_id": "8844d3f2d25f45df8105db1ab058d7d6",
    "rounds": [
        { "status": "offline" },
        { "status": "paused" },
        { "status": "started" }
    ],
    "updatedAt": ISODate("2020-07-05T21:21:58.823Z")
}
Seems like using negative indices would make this easy, allowing the n-1 element to be modified. I've seen many questions mention aggregation, but none modify the underlying data. I want to combine aggregation with updating a document.
Is it possible to use $set with a negative index?
As of v4.2, MongoDB does not have a straightforward way to update an array element by negative index.
However, starting from v4.2 you can use updates with an aggregation pipeline, which allows you to construct a new array based on the current array and replace the current array with it.
Instead of modifying the last element, you can construct a new array consisting of the first n - 1 elements combined with a new element { status: "terminated" }.
You can achieve this by using $slice to get the top n - 1 elements and combining them with the new element using $concatArrays.
This assumes the elements only contain the field status. If you want to update a field of an array element that contains multiple fields, you'll need to merge the updated field into the current element using $mergeObjects (see the sketch after the example below).
Model.updateOne({ // or updateMany with your condition
    _id: "8844d3f2d25f45df8105db1ab058d7d6"
}, [{
    $set: {
        rounds: {
            $concatArrays: [ // combine arrays
                {
                    $slice: [ // get the top n - 1 elements
                        "$rounds",
                        { $subtract: [{ $size: "$rounds" }, 1] }
                    ]
                },
                [{ status: "terminated" }] // a new element to replace the last element
            ]
        }
    }
}])
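And if the array elements carry more fields than just status, a hedged sketch of the $mergeObjects variant mentioned above (same document shape assumed) could look like:
Model.updateOne({
    _id: "8844d3f2d25f45df8105db1ab058d7d6"
}, [{
    $set: {
        rounds: {
            $concatArrays: [
                { $slice: ["$rounds", { $subtract: [{ $size: "$rounds" }, 1] }] },
                [{
                    // keep all existing fields of the last element, overwrite status only
                    $mergeObjects: [
                        { $arrayElemAt: ["$rounds", -1] },
                        { status: "terminated" }
                    ]
                }]
            ]
        }
    }
}])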

MongoDB nested array update multiple documents [duplicate]

I am trying to update a value in the nested array but can't get it to work.
My object is like this
{
    "_id": {
        "$oid": "1"
    },
    "array1": [
        {
            "_id": "12",
            "array2": [
                {
                    "_id": "123",
                    "answeredBy": [] // need to push "success"
                },
                {
                    "_id": "124",
                    "answeredBy": []
                }
            ]
        }
    ]
}
I need to push a value to "answeredBy" array.
In the example below, I tried pushing the string "success" to the "answeredBy" array of the object with _id "123", but it does not work.
callback = function(err, value) {
    if (err) {
        res.send(err);
    } else {
        res.send(value);
    }
};

conditions = {
    "_id": 1,
    "array1._id": 12,
    "array2._id": 123
};

updates = {
    $push: {
        "array2.$.answeredBy": "success"
    }
};

options = {
    upsert: true
};

Model.update(conditions, updates, options, callback);
I found this link, but its answer only says I should use an object-like structure instead of arrays. This cannot be applied in my situation; I really need my objects nested in arrays.
It would be great if you could help me out here. I've been spending hours trying to figure this out.
Thank you in advance!
General Scope and Explanation
There are a few things wrong with what you are doing here. Firstly, your query conditions: you are referring to several _id values where you should not need to, and at least one of them is not at the top level.
In order to get at a "nested" value, and also presuming that the _id value is unique and would not appear in any other document, your query form should be like this:
Model.update(
    { "array1.array2._id": "123" },
    { "$push": { "array1.0.array2.$.answeredBy": "success" } },
    function(err, numAffected) {
        // something with the result in here
    }
);
Now that would actually work, but really it is only a fluke that it does as there are very good reasons why it should not work for you.
The important reading is in the official documentation for the positional $ operator under the subject of "Nested Arrays". What this says is:
The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays, because the replacement for the $ placeholder is a single value
Specifically what that means is the element that will be matched and returned in the positional placeholder is the value of the index from the first matching array. This means in your case the matching index on the "top" level array.
So if you look at the query notation as shown, we have "hardcoded" the first ( or 0 index ) position in the top level array, and it just so happens that the matching element within "array2" is also the zero index entry.
To demonstrate this you can change the matching _id value to "124" and the result will $push a new entry onto the element with _id "123", as they are both in the zero index entry of "array1" and that is the value returned to the placeholder.
So that is the general problem with nesting arrays. You could remove one of the levels and you would still be able to $push to the correct element in your "top" array, but there would still be multiple levels.
Try to avoid nesting arrays as you will run into update problems as is shown.
The general case is to "flatten" the things you "think" are "levels" and actually make these "attributes" on the final detail items. For example, the "flattened" form of the structure in the question should be something like:
{
    "answers": [
        { "by": "success", "type2": "123", "type1": "12" }
    ]
}
Or even when accepting the inner array is $push only, and never updated:
{
    "array": [
        { "type1": "12", "type2": "123", "answeredBy": ["success"] },
        { "type1": "12", "type2": "124", "answeredBy": [] }
    ]
}
Which both lend themselves to atomic updates within the scope of the positional $ operator.
MongoDB 3.6 and Above
From MongoDB 3.6 there are new features available to work with nested arrays. This uses the positional filtered $[<identifier>] syntax in order to match the specific elements and apply different conditions through arrayFilters in the update statement:
Model.update(
    {
        "_id": 1,
        "array1": {
            "$elemMatch": {
                "_id": "12",
                "array2._id": "123"
            }
        }
    },
    {
        "$push": { "array1.$[outer].array2.$[inner].answeredBy": "success" }
    },
    {
        "arrayFilters": [{ "outer._id": "12" }, { "inner._id": "123" }]
    }
)
The "arrayFilters" passed to the options of the .update() method, or even the .updateOne(), .updateMany(), .findOneAndUpdate() or .bulkWrite() methods, specify the conditions to match on the identifiers given in the update statement. Any elements that match a given condition will be updated.
Because the structure is "nested", we actually use "multiple filters" as is specified with an "array" of filter definitions as shown. The marked "identifier" is used in matching against the positional filtered $[<identifier>] syntax actually used in the update block of the statement. In this case inner and outer are the identifiers used for each condition as specified with the nested chain.
This new expansion makes the update of nested array content possible, but it does not really help with the practicality of "querying" such data, so the same caveats apply as explained earlier.
What you typically really "mean" to express are "attributes", even if your brain initially thinks "nesting"; that is usually just a reaction to how you believe the "previous relational parts" come together. In reality you really need more denormalization.
Also see How to Update Multiple Array Elements in mongodb, since these new update operators actually match and update "multiple array elements" rather than just the first, which has been the previous action of positional updates.
NOTE Somewhat ironically, since this is specified in the "options" argument for .update() and like methods, the syntax is generally compatible with all recent release driver versions.
However, this is not true of the mongo shell, because of the way the method is implemented there ( "ironically for backward compatibility" ): the arrayFilters argument is not recognized, and is removed by an internal method that parses the options in order to deliver "backward compatibility" with prior MongoDB server versions and a "legacy" .update() API call syntax.
So if you want to use the command in the mongo shell or other "shell based" products ( notably Robo 3T ) you need a latest version from either the development branch or production release as of 3.6 or greater.
See also the all-positional operator $[], which also updates "multiple array elements", but without applying specified conditions: it applies to all elements in the array, where that is the desired action.
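For completeness, a small sketch of that all-positional form against the structure in this question, pushing to every answeredBy across both array levels (note that every element is touched, so only use this where that is actually intended):
Model.updateMany(
    { "_id": 1 },
    { "$push": { "array1.$[].array2.$[].answeredBy": "success" } }
)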
I know this is a very old question, but I just struggled with this problem myself, and found, what I believe to be, a better answer.
A way to solve this problem is to use sub-documents. This is done by nesting schemas within your schemas:
Array2Schema = new mongoose.Schema({
    answeredBy: [...] // whatever your answers look like
})

Array1Schema = new mongoose.Schema({
    array2: [Array2Schema]
})

MainSchema = new mongoose.Schema({
    array1: [Array1Schema]
})
This way the object will look like the one you show, but now each array are filled with sub-documents. This makes it possible to dot your way into the sub-document you want. Instead of using a .update you then use a .find or .findOne to get the document you want to update.
Main.findOne(
    { _id: 1 }
)
.exec(function(err, result) {
    result.array1.id(12).array2.id(123).answeredBy.push('success');
    result.save(function(err) {
        console.log(result);
    });
});
I haven't used the .push() function this way myself, so the syntax might not be right, but I have used both .set() and .remove(), and both work perfectly fine.

Upsert and $inc Sub-document in Array

The following schema is intended to record total views and views for a very specific day only.
const usersSchema = new Schema({
    totalProductsViews: { type: Number, default: 0 },
    productsViewsStatistics: [{
        day: { type: String, default: new Date().toISOString().slice(0, 10), unique: true },
        count: { type: Number, default: 0 }
    }],
});
So today's views will be stored in a different subdocument from yesterday's. To implement this I tried to use upsert, so that a subdocument will be created each day a product is viewed, and counts will be incremented and recorded for that particular day. I tried to use the following function, but it does not seem to work the way I intended.
usersSchema.statics.increaseProductsViews = async function (id) {
    // Based on day only.
    const todayDate = new Date().toISOString().slice(0, 10);
    const result = await this.findByIdAndUpdate(id, {
        $inc: {
            totalProductsViews: 1,
            'productsViewsStatistics.$[sub].count': 1
        },
    },
    {
        upsert: true,
        arrayFilters: [{ 'sub.day': todayDate }],
        new: true
    });
    console.log(result);
    return result;
};
What do I miss to get the functionality I want? Any help will be appreciated.
What you are trying to do here actually requires you to understand some concepts you may not have grasped yet. The two primary ones being:
You cannot use any positional update as part of an upsert since it requires data to be present
Adding items into arrays mixed with "upsert" is generally a problem that you cannot do in a single statement.
It's a little unclear if "upsert" is your actual intention anyway, or if you just presumed that was what you had to add in order to get your statement to work. It does complicate things if that is your intent, even if it's unlikely given the findByIdAndUpdate() usage, which would imply you were actually expecting the "document" to always be present.
At any rate, it's clear you actually expect to "Update the array element when found, OR insert a new array element where not found". This is actually a two write process, and three when you consider the "upsert" case as well.
For this, you actually need to invoke the statements via bulkWrite():
usersSchema.statics.increaseProductsViews = async function (_id) {
    // Based on day only.
    const todayDate = new Date().toISOString().slice(0, 10);
    await this.bulkWrite([
        // Try to match an existing element and update it ( do NOT upsert )
        {
            "updateOne": {
                "filter": { _id, "productsViewsStatistics.day": todayDate },
                "update": {
                    "$inc": {
                        "totalProductsViews": 1,
                        "productsViewsStatistics.$.count": 1
                    }
                }
            }
        },
        // Try to $push where the element is not there but the document is ( do NOT upsert )
        {
            "updateOne": {
                "filter": { _id, "productsViewsStatistics.day": { "$ne": todayDate } },
                "update": {
                    "$inc": { "totalProductsViews": 1 },
                    "$push": { "productsViewsStatistics": { "day": todayDate, "count": 1 } }
                }
            }
        },
        // Finally attempt an upsert where the "document" was not there at all,
        // only if you actually mean it - so optional
        {
            "updateOne": {
                "filter": { _id },
                "update": {
                    "$setOnInsert": {
                        "totalProductsViews": 1,
                        "productsViewsStatistics": [{ "day": todayDate, "count": 1 }]
                    }
                },
                "upsert": true
            }
        }
    ]);

    // return the modified document if you really must
    return this.findById(_id); // Not atomic, but the lesser of all evils
}
So there's a real good reason why the positional filtered $[<identifier>] operator does not apply here. The main reason is that its intended purpose is to update multiple matching array elements, and you only ever want to update one. There is actually a specific operator which does exactly that: the positional $ operator. Its condition, however, must be included within the query predicate ( the "filter" property in updateOne statements ) just as demonstrated in the first two statements of the bulkWrite() above.
So the main problem with using the positional filtered $[<identifier>] is that, just as the first two statements show, you cannot actually alternate between the $inc or the $push depending on whether the document actually contains an array entry for the day. All that will happen is, at best, no update will be applied when the current day is not matched by the expression in arrayFilters.
At worst, an actual "upsert" will throw an error due to MongoDB not being able to decipher the "path name" from the statement, and of course you simply cannot $inc something that does not exist as a "new" array element. That needs a $push.
That leaves you with the mechanic that you also cannot do both the $inc and $push within a single statement. MongoDB will error that you are attempting to "modify the same path" as an illegal operation. Much the same applies to $setOnInsert since whilst that operator only applies to "upsert" operations, it does not preclude the other operations from happening.
Thus the logical steps fall back to what the comments in the code also describe:
Attempt to match where the document contains an existing array element, then update that element. Using $inc in this case
Attempt to match where the document exists but the array element is not present and then $push a new element for the given day with the default count, updating other elements appropriately
IF you actually did intend to upsert documents ( not array elements, because that's the above steps ) then finally actually attempt an upsert creating new properties including a new array.
Finally there is the issue of the bulkWrite(). Whilst this is a single request to the server with a single response, it still is effectively three ( or two if that's all you need ) operations. There is no way around that and it is better than issuing chained separate requests using findByIdAndUpdate() or even updateOne().
Of course the main operational difference from the perspective of code you attempted to implement is that method does not return the modified document. There is no way to get a "document response" from any "Bulk" operation at all.
As such the actual "bulk" process will only ever modify a document with one of the three statements submitted based on the presented logic and most importantly the order of those statements, which is important. But if you actually wanted to "return the document" after modification then the only way to do that is with a separate request to fetch the document.
The only caveat here is that there is the small possibility that other modifications could have occurred to the document other than the "array upsert" since the read and update are separated. There really is no way around that, without possibly "chaining" three separate requests to the server and then deciding which "response document" actually applied the update you wanted to achieve.
So with that context it's generally considered the lesser of evils to do the read separately. It's not ideal, but it's the best option available from a bad bunch.
As a final note, I would strongly suggest actually storing the day property as a BSON Date instead of as a string. It actually takes fewer bytes to store and is far more useful in that form. As such, the following constructor is probably the clearest and least hacky:
const todayDate = new Date(new Date().setUTCHours(0,0,0,0))
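With that change, the statistics fragment of the schema might look like the following sketch; note the default is wrapped in a function so it is evaluated per document rather than once at schema definition:
productsViewsStatistics: [{
    day: {
        type: Date,
        default: () => new Date(new Date().setUTCHours(0, 0, 0, 0))
    },
    count: { type: Number, default: 0 }
}]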

Near Geometry with a Join

I'm trying to fetch results from my MongoDB server; the query: get the cars from the nearest agency.
This is what I have tried, but I am getting results without sorting:
let cars = await Cars.find({ disponible: true })
    .populate({
        path: 'agency',
        match: {
            "location": {
                $near: {
                    $geometry: {
                        coordinates: [ latitude, longitude ]
                    },
                }
            }
        },
        select: 'name'
    })
    .select('name agency');
// send result via api
res.status(200).json({ cars });
my schemas
// Car Schema
const carSchema = new Schema({
    name: { type: String, required: true },
    agency: { type: Schema.Types.ObjectId, ref: 'agencies' },
}, { timestamps: true });

// Agency Schema
const agencySchema = new Schema({
    name: { type: String, required: true },
    location: {
        type: {
            type: String,
            enum: ['Point'],
            default: 'Point'
        },
        coordinates: {
            type: [Number],
            required: true
        }
    },
}, { timestamps: true });
I want to get the cars with their agency, but sorted by the nearest agency.
There's a reason populate() cannot work
Using populate() you won't be able to do this, and for a number of reasons. The main reason being that all populate() is doing is essentially marrying up your foreign reference to results from another collection with given query parameters.
In fact with a $near query, the results could be quite weird, since you might not receive enough "near" results to actually marry up with all the parent references.
There's a bit more detail about the "foreign constraint" limitation with populate() in existing answers to Querying after populate in Mongoose and of course on the modern solution to this, which is $lookup.
Using $lookup and $geoNear
In fact, what you need is a $lookup along with a $geoNear, but you also must do the "join" the other way around to what you might expect. And thus from the Agency model you would do:
Agency.aggregate([
    // First find "near" agencies, and project a distance field
    { "$geoNear": {
        "near": {
            "type": "Point",
            "coordinates": [ longitude, latitude ]
        },
        "distanceField": "distance",
        "spherical": true
    }},
    // Then marry these up to Cars - which can be many
    { "$lookup": {
        "from": Car.collection.name,
        "let": { "agencyId": "$_id" },
        "pipeline": [
            { "$match": {
                "disponible": true,
                "$expr": { "$eq": [ "$$agencyId", "$agency" ] }
            }}
        ],
        "as": "cars"
    }},
    // Unwinding denormalizes that "many"
    { "$unwind": "$cars" },
    // Group is "inverting" the result
    { "$group": {
        "_id": "$cars._id",
        "car": { "$first": "$cars" },
        "agency": {
            "$first": {
                "$arrayToObject": {
                    "$filter": {
                        "input": { "$objectToArray": "$$ROOT" },
                        "cond": { "$ne": [ "$$this.k", "cars" ] }
                    }
                }
            }
        }
    }},
    // Sort by distance, nearest is least
    { "$sort": { "agency.distance": 1 } },
    // Reformat to expected output
    { "$replaceRoot": {
        "newRoot": {
            "$mergeObjects": [ "$car", { "agency": "$agency" } ]
        }
    }}
])
As stated the $geoNear part must come first. Bottom line is it basically needs to be the very first stage in an aggregation pipeline in order to use the mandatory index for such a query. Though it is true that given the form of $lookup shown here you "could" actually use a $near expression within the $lookup pipeline with a starting $match stage, it won't return what you expect since basically the constraint is already on the matching _id value. And it's really just the same problem populate() has in that regard.
And of course though $geoNear has a "query" constraint, you cannot use $expr within that option so this rules out that stage being used inside the $lookup pipeline again. And yes, still basically the same problem of conflicting constraints.
So this means you $geoNear from your Agency model instead. This pipeline stage has the additional thing it does which is it actually projects a "distanceField" into the result documents. So a new field within the documents ( called "distance" in the example ) will then indicate how far away from the queried point the matched document is. This is important for sorting later.
Of course you want this "joined" to the Car, so you want to do a $lookup. Note that since MongoDB has no knowledge of mongoose models the $lookup pipeline stage expects the "from" to be the actual collection name on the server. Mongoose models typically abstract this detail away from you ( though it's normally the plural of the model name, in lowercase ), but you can always access this from the .collection.name property on the model as shown.
The other arguments are the "let" in which you keep a reference to the _id of the current Agency document. This is used within the $expr of the $match in order to compare the local and foreign keys for the actual "joining" condition. The other constraints in the $match further filter down the matching "cars" to those criteria as well.
Now it's probably likely there are in fact many cars to each agency and that is one basic reason the model has been done like this in separate collections. Regardless of whether it's one to one or one to many, the $lookup result always produces an array. Basically we now want this array to "denormalize" and essentially "copy" the Agency detail for each found Car. This is where $unwind comes in. An added benefit is that when you $unwind the array of matching "cars", any empty array where the constraints did not match anything effectively removes the Agency from the possible results altogether.
Of course this is the wrong way around from how you actually want the results, as it's really just "one car" with "one agency". This is where $group comes in and collects information "per car". Since this way around it is expected to be "one to one", the $first operator is used as an accumulator.
There is a fancy expression in there with $objectToArray and $arrayToObject, but really all that is doing is removing the "cars" field from the "agency" content, just as the "$first": "$cars" is keeping that data separate.
Back to something closer to the desired output, the other main thing is to $sort the results so the "nearest" results are the ones listed first, just as the initial goal was all along. This is where you actually use the "distance" value which was added to the document in the original $geoNear stage.
At this point you are nearly there, and all that is needed is to reform the document into the expected output shape. The final $replaceRoot does this by taking the "car" value from the earlier $group output and promoting it to the top level object to return, and "merging" in the "agency" field to appear as part of the Car itself. Clearly $mergeObjects does the actual "merging".
That's it. It does work, but you may have spotted the problem that you don't actually get to say "near to this AND with this other constraint" technically as part of a single query. And a funny thing about "nearest" results is that they have an in-built "limit" on the results they should return.
And that is basically the next topic to discuss.
Changing the Model
Whilst all the above is fine, it's still not really perfect and has a few problems. The most notable problem should be that it's quite complex and that "joins" in general are not good for performance.
The other considerable flaw is that, as you might have gathered from the "query" parameter on the $geoNear stage, you are not really getting the equivalent of both conditions ( find the nearest agency AND the car has disponible: true ), since on separate collections the initial "near" does not consider the other constraint.
Nor can this even be done from the original order just as was intended, and again comes back to the problem with populate() here.
So the real issue unfortunately is design. And it may be a difficult pill to swallow, but the current design which is extremely "relational" in nature is simply not a good fit for MongoDB in how it would handle this type of operation.
The core problem is the "join", and in order to make things work we basically need to get rid of it. And you do that in MongoDB design by embedding the document instead of keeping a reference in another collection:
const carSchema = new Schema({
    name: { type: String, required: true },
    agency: {
        name: { type: String, required: true },
        location: {
            type: {
                type: String,
                enum: ['Point'],
                default: 'Point'
            },
            coordinates: {
                type: [Number],
                required: true
            }
        }
    }
}, { timestamps: true });
In short, "MongoDB is NOT a relational database", and it also does not really "do joins", as the sort of integral constraint over a join you are looking for simply is not supported.
Well, it's not supported by $lookup and the ways it will do things, but the official line has been and will always be that a "real join" in MongoDB is embedded detail. Which simply means "if it's meant to be a constraint on queries you want to do, then it belongs in the same document".
With that redesign the query simply becomes:
Car.find({
    disponible: true,
    "agency.location": {
        $near: {
            $geometry: {
                type: "Point",
                // GeoJSON order is [ longitude, latitude ]
                coordinates: [ longitude, latitude ]
            }
        }
    }
})
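Keep in mind that $near still requires a geospatial index on the queried path; with the embedded design that index now lives on the Car collection, declared in the schema along these lines:
carSchema.index({ "agency.location": "2dsphere" });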
YES, that would mean that you likely duplicate a lot of information about an "agency" since the same data would likely be present on many cars. But the facts are that for this type of query usage, this is actually what MongoDB is expecting you to model as.
Conclusion
So the real choices here come down to which case suits your needs:
Accept that you are possibly returning less than the expected results due to "double filtering" through the use of a $geoNear and $lookup combination. Noting that $geoNear will only return 100 results by default, unless you change that. This can be an unreliable combination for "paged" results.
Restructure your data accepting the "duplication" of agency detail in order to get a proper "dual constraint" query since both criteria are in the same collection. It's more storage and maintenance, but it is more performant and completely reliable for "paged" results.
And of course if it's neither acceptable to use the aggregation approach shown or the restructure of data, then this can only show that MongoDB is probably not best suited to this type of problem, and you would be better off using an RDBMS where you decide you must keep normalized data as well as be able to query with both constraints in the same operation. Provided of course you can choose an RDBMS which actually supports the usage of such GeoSpatial queries along with "joins".
