I want to enable profiling on one of my MongoDB databases, via the node-mongodb-native driver.
However there doesn't seem to be a Db.setProfilingLevel() method (apart from on the Admin DB).
I've tried using db.command({setProfilingLevel: 2}) but I get "no such cmd: setProfilingLevel".
It works fine through the mongo shell with db.setProfilingLevel(2).
I see what you mean about the methods, but I think the issue with the db.command attempt is that you are trying to run a shell helper as a command rather than the command itself. The actual command has this format:
// get current levels
db.runCommand({ profile : -1 })
// set the level to log slow ops
db.runCommand({ profile : 1 })
// set to log slow ops and change the threshold to 200ms
db.runCommand({ profile : 1, slowms : 200 })
// revert to defaults
db.runCommand({ profile : 0, slowms : 100 })
So, if you try passing the relevant value into db.command that should work.
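A minimal sketch of that approach with the node-mongodb-native driver (it assumes a connected `db` handle; connection details are not from the question). The shell helper db.setProfilingLevel(2) expands to the `profile` command shown above, which db.command() can send directly:

```javascript
// Build the same command document the shell helpers send.
function profileCommand(level, slowms) {
  const cmd = { profile: level };
  if (slowms !== undefined) cmd.slowms = slowms;
  return cmd;
}

// Usage with a hypothetical connected db handle:
// db.command(profileCommand(2));       // profile all operations
// db.command(profileCommand(1, 200));  // log ops slower than 200ms
// db.command(profileCommand(-1));      // read the current level
```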
When I insert records into my local DynamoDB table via TypeDORM in a lambda, it inserts the record with the wrong entity information. For example, the GSI1PK
GSI1: {
partitionKey: 'PRO#{{primary_key}}',
sortKey: 'PRO#{{primary_key}}#YEAR#{{year}}',
type: INDEX_TYPE.GSI,
},
of a Pro record should be (and is, when the code is run as an individual node file) PRO#PROCUREMENT_2022, but when I run the same code as a lambda it saves the GSI1PK as an IdList entity: LIST#PROLIST_2022. I was able to work out that whichever entity I added last in my TypeDORM createConnection call was the one whose GSI1PK information would be used. Is there a reason the createConnection function gives different results when the code is run from a lambda vs as a standalone node file?
createConnection({
entities : [Procurement, IdList],
name : 'default',
table : testTable,
}) ;
This code works when it is run as a standalone node file and is able to handle the Procurement and IdList entities at the same time.
@Greesemonkey3
Hi, this is the creator of TypeDORM.
There should not be any difference in behavior when running it inside a stand-alone node server or in a lambda. The entity inserted should always be the same regardless of the runtime chosen.
Can you please raise this as an issue in the official github repository, so that it can be tracked better?
I have an AWS instance with nodejs and mongod running. The collection I am trying to query has roughly 289,248 documents in it.
Here is the code I am using to query my data:
var collection = db.collection('my_collection');
collection.find({ $and: [
    { "text": { $regex: ".*" + keyword + ".*" } },
    { "username": username }
] }).limit(10).toArray(function(err, docs) {
    // handle docs here
});
Originally, I was having issues querying just a username (collection.find({"username": username})) because there are so many entries in mongo. So I started limiting my query, and in the mongo console I can set a limit of 30 and it returns the results I am looking for.
However, when I run this query from nodejs, it crashes my mongod service and I have to restart it. On the node server, a limit of 1 works fine but a limit of 5 does not. I can't simply use a limit of 1 if there are many more results in the database. What can I do?
Does not using $and make a difference?
collection.find({text:{$regex:".*"+keyword+".*"},username: username})
I also wonder whether 'text' is a text index, in which case the query should use '$text'.
I also note that you use two variables in your query expression, and wonder if you have verified that those variables are defined.
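One more thing worth checking on the regex point: if the keyword comes from user input, an unescaped keyword acts as regex injection and can force expensive scans. A sketch, using the same field names as the question (the leading and trailing ".*" in the original are redundant for an unanchored $regex match and are dropped here):

```javascript
// Escape regex metacharacters in user input before building the query.
function escapeRegex(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Build the find() filter without $and (implicit AND across fields).
function buildQuery(keyword, username) {
  return {
    text: { $regex: escapeRegex(keyword) },
    username: username
  };
}
```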
I have a large MongoDB collection, containing more than 2GB of raw data and I use a very simple query to fetch a specific document from the collection by its Id. Document sizes currently range from 10KB to 4MB, and the Id field is defined as an index.
This is the query I'm using (with the mongojs module):
db.collection('categories').find({ id: category_id },
function(err, docs) {
callback(err, docs.length ? docs[0] : false);
}).limit(1);
When I execute this query using MongoDB shell or a GUI such as Robomongo it takes approximately 1ms to fetch the document, no matter what its physical size, but when I execute the exact same query on NodeJS the response time ranges from 2ms to 2s and more depending on the amount of data. I only measure the time it takes to receive a response and even in cases where NodeJS waits for more than 500ms the MongoDB profiler (.explain()) shows it took only a single millisecond to execute the query.
Now, I'm probably doing something wrong but I can't figure out what it is. I'm rather new to NodeJS but I had experience with MongoDB and PHP in the past and I never encountered such performance issues, so I tend to think I'm probably abusing NodeJS in some way.
I also tried profiling with SpyJS in WebStorm; I saw a lot of bson.deserialize calls that quickly add up into a large stack, but I couldn't investigate further because SpyJS always crashes at that point. It's probably related, but I still have no idea how to deal with it.
Please advise, any leads will be appreciated.
Edit:
This is the result of db.categories.getIndexes():
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "my_db.categories"
},
{
"v" : 1,
"key" : {
"id" : 1
},
"name" : "id_1",
"ns" : "my_db.categories"
}
]
I also tried using findOne which made no difference:
db.collection('categories').findOne({ id: category_id },
function(err, doc) {
callback(err, doc || false);
});
My guess is the .limit(1) is ignored because the callback is provided early. Once find sees a callback it's going to execute the query, and only after the query has been sent to mongo will the .limit modifier try to adjust the query but it's too late. Recode as such and see if that solves it:
db.collection('categories').find({ id: category_id }).limit(1).exec(
function(err, docs) {
callback(err, docs.length ? docs[0] : false);
});
Most likely you'll need to have a combination of normalized and denormalized data in your object. Sending 4MB across the wire at a time seems pretty heavy, and likely will cause problems for any browser that's going to be doing the parsing of the data.
Most likely you should store the top 100 products, the first page of products, or some smaller subset that makes sense for your application in the category. This may be the top alphabetically, most popular, newest, or some other app-specific metric you determine.
When you go about editing a category, you'll use the $push/$slice method to ensure you avoid unbounded array growth.
Then when you actually page through the results you'll do a separate query to the individual products table by category. (Index that.)
I've written about this before here:
https://stackoverflow.com/a/27286612/68567
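A hedged sketch of the $push with $slice update mentioned above (the collection, field, and document shapes here are illustrative, not from the question):

```javascript
// Build an update that appends one item to an embedded array and caps
// its length, preventing unbounded array growth. A negative $slice
// keeps the newest `cap` entries.
function pushCapped(field, item, cap) {
  return {
    $push: {
      [field]: { $each: [item], $slice: -cap }
    }
  };
}

// Usage with a hypothetical categories collection:
// db.collection('categories').update({ id: category_id },
//   pushCapped('products', { sku: 'A1', name: 'Widget' }, 100));
```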
Basically the query works in mongo but not in sails controller:
db.membermodel.find({identifier:{$in:["2","3","4"]}}); // works
MemberModel.find({
identifier:{$in:["2","3","4"]},
}).then(function(members){
// doesn't work
});
data returned:
{ "_id" : ObjectId("52d1a484f2b5e88cb5d4072c"), "identifier" : "2", "deviceToken" : "token2"}
{ "_id" : ObjectId("52d1a487f2b5e88cb5d4072d"), "identifier" : "3", "deviceToken" : "token3"}
Thanks,
Mars
This isn't the way to do "in" queries with Waterline. You simply set the attribute you're selecting to the array value:
MemberModel.find({
identifier:["2","3","4"]
}).exec(function(err, members){
...
});
If you really need to use low-level Mongo features, you can get an instance of the native collection with
MemberModel.native(function(err, collection) {
//do native mongo driver stuff with collection
});
It's hard to tell exactly how the model is being queried, so I suggest you "spy" on what Mongo actually receives as the MVC framework's query, since this isn't a direct query to Mongo; it's passed through the framework.
I'm quite sure you're still developing, so you have access to your mongo instance. Restart it with full profiling (the trick is that everything over 1ms is treated as slow):
mongod --profile=1 --slowms=1 &
Tail the resulting log, which is normally at
/var/log/mongodb/mongodb.log
with the command
tail -f /var/log/mongodb/mongodb.log
Send your query again and check what MongoDb is executing.
I'm trying to use MongoDB 2.4's experimental text search feature from within nodejs. The only problem is that the native nodejs mongo driver doesn't seem to support collection-level runCommand, as far as I can tell.
The Mongo shell syntax looks like this:
db.collection.runCommand( "text", { search : "Textvalue" } );
There is a db.command / db.executeDbCommand function it appears, but I don't know how to choose a collection and run the text command using it (if it is possible), as it needs to be on the collection level and not the db level.
Any help would be appreciated
I managed to get it working through a combination of Asya Kamsky's comment, by utilizing
this.db.command({ text: "collection", search: "phrase" }).
The problem was that it isn't returned like a standard cursor result, so a toArray() call was failing. Instead, I put the callback directly inside:
this.db.command({ text: "collection", search: "phrase" }, function(err, res) {
    // the matches are in res.results, so no toArray() is needed
});