MongoDB sort in a case-insensitive manner - node.js

I am stuck on a project in Node.js (Express) with MongoDB as the database. When I fetch all data using sort(), it returns the data in the wrong order. Is there a way to get it in the format I expect, as shown below?
If we have three records in the DB:
id | Name   | age
---|--------|----
1  | atul   | 21
2  | Bhavik | 22
3  | Jay    | 25
What I am getting at present is the records in the order 2, 3, 1.
What I expect is the order 1, 2, 3.
In other words, the sort should ignore case. Is this possible without adding a new column?

You need to use collation here with locale: "en"
db.collection.find({}).collation({ locale: "en" }).sort({ name: 1 })
So for the documents below
{ "_id" : 1, "name" : "Bhavik" }
{ "_id" : 2, "name" : "Jay" }
{ "_id" : 3, "name" : "atul" }
You will get
{ "_id" : 3, "name" : "atul" }
{ "_id" : 1, "name" : "Bhavik" }
{ "_id" : 2, "name" : "Jay" }

Create the collection with a default collation; this way you can sort on any property case-insensitively.
db.createCollection("collection_name", { collation: { locale: 'en_US', strength: 2 } } )
db.getCollection('collection_name').find({}).sort( { 'property_name': -1 } )
More info: https://docs.mongodb.com/manual/core/index-case-insensitive/
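If you run such queries often, a case-insensitive index helps; a sketch, assuming a name field:
// Sketch: index built with the same collation, so queries that specify that collation can use it.
db.collection_name.createIndex({ name: 1 }, { collation: { locale: 'en', strength: 2 } })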

You can pass collation: { locale: 'en' } directly in the options parameter of the find method:
db.collection.find({ ...query }, {
  sort: ...,
  limit: ...,
  collation: { locale: 'en' }
})

$add,$subtract aggregation-framework in mongodb

Hi, here is the sample data:
// collection: test
{
  "_id" : {
    "date" : ISODate("2020-02-11T17:00:00Z"),
    "userId" : ObjectId("5e43e5cdc11f750864f46820"),
    "adminId" : ObjectId("5e43de778b57693cd46859eb")
  },
  "outstanding" : 212.39999999999998,
  "totalBill" : 342.4,
  "totalPayment" : 130
}
{
  "_id" : {
    "date" : ISODate("2020-02-11T17:00:00Z"),
    "userId" : ObjectId("5e43e73169fe1e3fc07eb7c5"),
    "adminId" : ObjectId("5e43de778b57693cd46859eb")
  },
  "outstanding" : 797.8399999999999,
  "totalBill" : 797.8399999999999,
  "totalPayment" : 0
}
I need to structure a query which does the following:
I need to calculate actualOutstanding = (totalBill + outstanding) - totalPayment,
and I need to save this actualOutstanding in the same collection and the same document, matched on {"_id" : { "date", "userId", "adminId" }}.
NOTE: userId is different in both the documents.
Mongo 4.2+ introduced pipelined updates, meaning we can now use aggregation expressions to update documents.
db.collection.updateOne(
  {
    "_id.adminId": ObjectId("5e43de778b57693cd46859eb"),
    "_id.userId": ObjectId("5e43e73169fe1e3fc07eb7c5"),
    "_id.date": ISODate("2020-02-11T18:30:00Z")
  },
  [
    { "$set": {
        actualOutstanding: {
          $subtract: [ { $add: ["$totalBill", "$outstanding"] }, "$totalPayment" ]
        }
    } }
  ]
);
For any other Mongo version you have to split it into two actions: first query and calculate, then update the document with the calculated value.
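A minimal sketch of that two-step approach; the field names come from the sample documents above, but the exact filter values are assumptions:
// Pre-4.2 sketch: read the document, compute in the application, write the result back.
var filter = {
  "_id.adminId": ObjectId("5e43de778b57693cd46859eb"),
  "_id.userId": ObjectId("5e43e73169fe1e3fc07eb7c5"),
  "_id.date": ISODate("2020-02-11T17:00:00Z")
};
var doc = db.collection.findOne(filter);
db.collection.updateOne(filter, {
  $set: { actualOutstanding: (doc.totalBill + doc.outstanding) - doc.totalPayment }
});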

Compare two Collections in MongoDB and show the differences

I'm trying to compare two collections in mongodb. I have Collection A and Collection B and I only want to show the Differences. How is this done? I thought it could be done with the Aggregation Framework but I did not get the expected values. I just want to see which Document in Collection A is not the same as in Collection B.
Collection: A
{
  "_id" : ObjectId("x"),
  "p" : [
    { "t" : 1, "p" : 123 },
    { "t" : 2, "p" : 123 }
  ]
},
{
  "_id" : ObjectId("y"),
  "p" : [
    { "t" : 1, "p" : 234 },
    { "t" : 2, "p" : 234 }
  ]
}
Collection: B
{
  "_id" : ObjectId("x"),
  "p" : [
    { "t" : 1, "p" : 123 },
    { "t" : 2, "p" : 538458 } // OTHER VALUE HERE
  ]
},
{
  "_id" : ObjectId("y"),
  "p" : [
    { "t" : 1, "p" : 234 },
    { "t" : 2, "p" : 234 }
  ]
}
You could export each collection using mongoexport; this will create a file with all the documents, but make sure you omit the _id (documents may be identical but will have different ids):
mongoexport --db db_name --collection collection_name | sed '/"_id":/s/"_id":[^,]*,//' > file_name.json
Then you can compare the two files using diff.

Formatting the returned object from MongoDB/Mongoose group by

I have a MongoDB with documents of the form:
{
...
"template" : "templates/Template1.html",
...
}
where template is either "templates/Template1.html", "templates/Template2.html" or "templates/Template3.html".
I'm using this query to group by template and count how many times each template is used:
var group = {
key:{'template':1},
reduce: function(curr, result){ result.count++ },
initial: { count: 0 }
};
messageModel.collection.group(group.key, null, group.initial, group.reduce, null, true, cb);
I'm getting back the correct result, but it's formatted like this:
{
"0" : {
"template" : "templates/Template1.html",
"count" : 2 },
"1" : {
"template" : "templates/Template2.html",
"count" : 2 },
"2" : {
"template" : "templates/Template3.html",
"count" : 1 }
}
I was wondering if it's possible to change the query so that it returns something like:
{
"templates/Template1.html" : { "count" : 2 },
"templates/Template2.html" : { "count" : 2 },
"templates/Template3.html" : { "count" : 1 }
}
or even:
{
"templates/Template1.html" : 2 ,
"templates/Template2.html" : 2 ,
"templates/Template3.html" : 1
}
I would rather change the query and not parse the returned object from the original query.
As mentioned by Blakes Seven in the comments, you could use aggregate() instead of group() to achieve nearly your desired result.
messageModel.collection.aggregate([
{ // Group the collection by `template` and count the occurrences
$group: {
_id: "$template",
count: { $sum: 1 }
}
},
{ // Format the output
$project: {
_id: 0,
template: "$_id",
count: 1
}
},
{ // Sort the formatted output
$sort: { template: 1 }
}
]);
The output would look like this:
[
  { "template" : "templates/Template1.html", "count" : 2 },
  { "template" : "templates/Template2.html", "count" : 2 },
  { "template" : "templates/Template3.html", "count" : 1 }
]
Again, as stated by Blakes in the comments, the database can only output an array of objects rather than a solitary object. That would be a transformation that you would need to do outside of the database.
I think it deserves to be restated that this transformation produces an anti-pattern and should be avoided. An object key name provides the context or description for the value. Using a file location as a key name would be a fairly vague description whereas 'template' provides a bit more information about what that value represents.
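That said, if you still want the keyed-object shape, the reshaping is a one-liner on the client; a sketch, assuming the aggregation output is in an array named results:
// Turn [{ template, count }, ...] into { "<template>": <count>, ... } in application code.
var byTemplate = results.reduce(function (acc, doc) {
  acc[doc.template] = doc.count;
  return acc;
}, {});
// => { "templates/Template1.html": 2, "templates/Template2.html": 2, "templates/Template3.html": 1 }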

MongoDB-Query Optimization

I have a collection with a sub-document consisting of more than 40K records.
My aggregate query takes about 300 seconds. I have tried optimizing it using compound as well as multikey indexes, which brings it down to 180 seconds.
I still need to reduce the query execution time.
Here is my collection:
{
"_id" : ObjectId("545b32cc7e9b99112e7ddd97"),
"grp_id" : 654,
"user_id" : 2,
"mod_on" : ISODate("2014-11-06T08:35:40.857Z"),
"crtd_on" : ISODate("2014-11-06T08:35:24.791Z"),
"uploadTp" : 0,
"tp" : 1,
"status" : 3,
"id_url" : [
{"mid":"xyz12793"},
{"mid":"xyz12794"},
{"mid":"xyz12795"},
{"mid":"xyz12796"}
],
"incl" : 1,
"total_cnt" : 25,
"succ_cnt" : 25,
"fail_cnt" : 0
}
and the following is my query:
db.member_id_transactions.aggregate([ { '$match':
{ id_url: { '$elemMatch': { mid: 'xyz12794' } } } },
{ '$unwind': '$id_url' },
{ '$match': { grp_id: 654, 'id_url.mid': 'xyz12794' } } ])
Has anyone faced the same issue?
Here's the output of the aggregate query with the explain option:
{
"result" : [
{
"_id" : ObjectId("546342467e6d1f4951b56285"),
"grp_id" : 685,
"user_id" : 2,
"mod_on" : ISODate("2014-11-12T11:24:01.336Z"),
"crtd_on" : ISODate("2014-11-12T11:19:34.682Z"),
"uploadTp" : 1,
"tp" : 1,
"status" : 3,
"id_url" : [
{"mid":"xyz12793"},
{"mid":"xyz12794"},
{"mid":"xyz12795"},
{"mid":"xyz12796"}
],
"incl" : 1,
"__v" : 0,
"total_cnt" : 21406,
"succ_cnt" : 21402,
"fail_cnt" : 4
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("545c8d37ab9cc679383a1b1b")
}
}
One way to reduce the number of records being filtered further is to include the field grp_id in the first $match operator.
db.member_id_transactions.aggregate([
{$match:{ "id_url.mid": 'xyz12794',"grp_id": 654 } },
{$unwind: "$id_url" },
{$match: { "id_url.mid": "xyz12794" } }
])
See how the performance is now. Add grp_id to the index to get better response time.
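For example, a compound index covering the first $match stage could look like the sketch below; the index shape is a suggestion, not taken from the question:
// Suggested compound index so the initial $match on grp_id and id_url.mid can use an index.
db.member_id_transactions.ensureIndex({ "grp_id": 1, "id_url.mid": 1 })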
The above aggregation query, though it works, is unnecessary. Since you are not altering the structure of the document, and you expect only one element in the array to match the filter condition, you could just use a simple find and project.
db.member_id_transactions.find(
{ "id_url.mid": "xyz12794","grp_id": 654 },
{"_id":0,"grp_id":1,"id_url":{$elemMatch:{"mid":"xyz12794"}},
"user_id":1,"mod_on":1,"crtd_on":1,"uploadTp":1,
"tp":1,"status":1,"incl":1,"total_cnt":1,
"succ_cnt":1,"fail_cnt":1
}
)

MongoDB geospatial index, how to use it with array elements?

I would like to get Kevin's pub spots near a given position. Here is the userSpots collection:
{
  user: 'Kevin',
  spots: [
    { name: 'a', type: 'pub', location: [x, y] },
    { name: 'b', type: 'gym', location: [v, w] }
  ]
},
{
  user: 'Layla',
  spots: [
    ...
  ]
}
Here is what I tried :
db.userSpots.findOne(
  {
    user: 'Kevin',
    spots: {
      $elemMatch: {
        location: { $nearSphere: [lng, lat], $maxDistance: d },
        type: 'pub'
      }
    }
  },
  function(err){...}
)
I get a strange error. Mongo tells me there is no 2d index on the location field. But when I check with db.userSpots.getIndexes(), the 2d index is there. Why doesn't MongoDB see the index? Is there something I am doing wrong?
MongoError: can't find special index: 2d for : { spots: { $elemMatch: { type:'pub',location:{ $nearSphere: [lng,lat], $maxDistance: d}}}, user:'Kevin'}
db.userSpots.getIndexes() output:
{
"0" : {
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "mydb.userSpots",
"name" : "_id_"
},
"1" : {
"v" : 1,
"key" : {
"spots.location" : "2d"
},
"ns" : "mydb.usersBoxes",
"name" : "spots.location_2d",
"background" : true,
"safe" : null
}
}
For a similar geospatial app, I transformed the location into GeoJSON:
{
"_id" : ObjectId("5252cbdd9520b8b18ee4b1c3"),
"name" : "Seattle - Downtown",
"location" : {
"type" : "Point",
"coordinates" : [
-122.33145,
47.60789
]
}
}
(The coordinates are in longitude/latitude format; Mongo's use of GeoJSON is described here.)
The index is created using:
db.userSpots.ensureIndex({"location": "2dsphere"})
In my aggregation pipeline, I find matches using:
{"$match":{"location":{"$geoWithin": {"$centerSphere":[[location.coordinates[0], location.coordinates[1]], radius/3959]}}}}
(where radius is measured in miles; 3959 is the Earth's radius in miles, used to convert the distance to radians).
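Adapted to the userSpots schema from the question, the same idea might look like the sketch below; $geoWithin does not require a geospatial index, and lng, lat, and radius are placeholders:
// Sketch: unwind the spots array, then keep pubs inside the sphere around [lng, lat].
db.userSpots.aggregate([
  { $match: { user: 'Kevin' } },
  { $unwind: '$spots' },
  { $match: {
      'spots.type': 'pub',
      'spots.location': { $geoWithin: { $centerSphere: [[lng, lat], radius / 3959] } }
  } }
])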
To index documents containing an array of geo data, MongoDB uses a multikey index. A multikey index effectively unwinds the document into several index entries, one per array element, rather than indexing the array as a whole, so the index treats that key field as a single-value field, not an array.
Try querying it without the $elemMatch operator.
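A sketch of what that suggestion might look like with dot notation instead of $elemMatch; note that without $elemMatch the type and location conditions are no longer required to match the same array element:
// Sketch: dot-notation query against the 2d index on spots.location (lng, lat, d are placeholders).
db.userSpots.find({
  user: 'Kevin',
  'spots.type': 'pub',
  'spots.location': { $nearSphere: [lng, lat], $maxDistance: d }
})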
