Meteor last executed query in mongodb? - node.js

Are Meteor's Mongo queries the same as plain MongoDB queries? I am using an external MongoDB, so I need to debug my queries. Is there any way to find the last executed query in Mongo?

I don't know if this works in meteor mongo - but you seem to be using an external mongo - so you can set up profiling with a capped collection, so that the collection never grows beyond a certain size. If you only need the last op, you can make the size much smaller than this.
db.createCollection( "system.profile", { capped: true, size:4000000 } )
The mongo doc is here: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
From the mongo docs:
To return the most recent 10 log entries in the system.profile
collection, run a query similar to the following:
db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty()
Since it's sorted inversely by time, just take the first record from the result.
Otherwise you could roll your own with a temporary client-only mongo collection:
Queries = new Mongo.Collection(null);
Create an object containing your query, remove the last record, and insert the new one.
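That roll-your-own approach can be sketched in plain JS (a stand-in array replaces the Meteor client-only collection so the idea is runnable anywhere; the field names are illustrative):

```javascript
// Minimal "last query" log: one record, replaced on every new query.
// In Meteor this would be Queries = new Mongo.Collection(null).
const queryLog = [];

function logQuery(selector, options) {
  queryLog.length = 0;                       // drop the previous record
  queryLog.push({ selector, options, ts: new Date() });
}

logQuery({ gameId: 42 }, { limit: 1 });
logQuery({ name: 'foo' }, { sort: { ts: -1 } });
// queryLog now holds only the most recent query
```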

Related

mongoose query using sort and skip on populate is too slow

I'm using an ajax request from the front end to load more comments to a post from the back-end which uses NodeJS and mongoose. I won't bore you with the front-end code and the route code, but here's the query code:
Post.findById(req.params.postId).populate({
  path: type, // type will either contain "comments" or "answers"
  populate: {
    path: 'author',
    model: 'User'
  },
  options: {
    sort: sortBy, // sortBy contains either "-date" or "-votes"
    skip: parseInt(req.params.numberLoaded), // how many are already shown
    limit: 25 // only load this many new comments at a time
  }
}).exec(function(err, foundPost){
  console.log("query executed"); // code takes too long to get to this line
  if (err){
    res.send("database error, please try again later");
  } else {
    res.send(foundPost[type]);
  }
});
As mentioned in the title, everything works fine; my problem is just that this is too slow: the request takes about 1.5-2.5 seconds. Surely mongoose has a way of doing this that loads faster. I poked around the mongoose docs and Stack Overflow, but didn't really find anything useful.
The skip-and-limit approach is inherently slow in MongoDB because the server typically has to retrieve and sort all matching documents before it can skip past the unwanted ones and return the desired segment of the results.
What you need to do to make it faster is to define indexes on your collections.
According to MongoDB's official documents:
Indexes support the efficient execution of queries in MongoDB. Without indexes, MongoDB must perform a collection scan, i.e. scan every document in a collection, to select those documents that match the query statement. If an appropriate index exists for a query, MongoDB can use the index to limit the number of documents it must inspect.
-- https://docs.mongodb.com/manual/indexes/
Indexes increase the collection's storage size somewhat, but they improve query efficiency a lot.
Indexes are commonly defined on fields which are frequently used in queries. In this case, you may want to define indexes on the date and/or votes fields.
Read mongoose documentation to find out how to define indexes in your schemas:
http://mongoosejs.com/docs/guide.html#indexes
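The quoted point about collection scans can be illustrated with a toy in-memory sketch (plain JS, not mongoose; the documents are made up): a lookup structure on the queried field replaces a scan of every document, which is the role MongoDB's B-tree indexes play for sorts and range queries.

```javascript
// Toy illustration of what an index buys you.
const docs = [
  { _id: 1, date: '2020-01-01' },
  { _id: 2, date: '2020-02-01' }
];

// "Collection scan": inspect every document until one matches.
const scanHit = docs.find(d => d.date === '2020-02-01');

// "Index": a precomputed lookup structure on the date field.
const dateIndex = new Map(docs.map(d => [d.date, d]));
const indexHit = dateIndex.get('2020-02-01');
// Both find the same document; the index just finds it without scanning.
```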

node cluster: ensure only one instance of a function is running at a time

I have a cluster of node worker servers that handle hitting an api and inserting data into a mongo db. The problem I am having is that one of these functions appears, every so often, to insert two copies of the same document. It checks whether the document has already been created with a query like so:
gameDetails.findOne({ gameId: gameId }, function(err, gameCheck) {
  if (!gameCheck) {
    // insert the document
  }
});
How can I ensure that only one instance of this function runs at a time? Alternatively, if I have not deduced the actual root problem, what could cause a mongo query like this to sometimes result in multiple copies of the same document, containing the same gameId, being inserted?
findOne is being called multiple times before the document has had time to be inserted, i.e. something like the following is happening:
findThenInsert()
findThenInsert()
findThenInsert()
// findOne returns null, insert called
// findOne returns null, insert called
// document gets inserted
// findOne returns a gameCheck
// document gets inserted
You should use a unique index to prevent duplicates. Then, your node instances could optimistically call insert straight away, and simply handle the error if they were too late, which is similar to your 'if found do nothing' logic.
Alternatively if you don't mind the document being updated each time, you can use the upsert method, which is atomic:
db.collection.update(query, update, {upsert: true})
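The unique-index pattern can be sketched in plain JS (a Set stands in for a unique index on gameId, which you would create for real with `db.gameDetails.createIndex({ gameId: 1 }, { unique: true })`; with the real driver you would catch the E11000 duplicate-key error instead of this hand-rolled one):

```javascript
// "Insert optimistically, handle the duplicate error" pattern.
const uniqueGameIds = new Set();

function insertGame(doc) {
  if (uniqueGameIds.has(doc.gameId)) {
    throw new Error('duplicate key');        // what the unique index enforces
  }
  uniqueGameIds.add(doc.gameId);
}

function insertIfNew(doc) {
  try {
    insertGame(doc);
    return 'inserted';
  } catch (e) {
    return 'already exists';                 // another worker got there first
  }
}

// Three racing workers all try the same game: only the first insert wins.
const results = [1, 2, 3].map(() => insertIfNew({ gameId: 42 }));
```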
Also see
MongoDB atomic "findOrCreate": findOne, insert if nonexistent, but do not update

Mongoose is not recreating the index after the collection is dropped

I know I should remove all documents instead of dropping the collection. But why does mongoose not recreate the index after the collection is dropped?
I have a schema in which one of the fields has unique:true set.
The index is created when I start the server. But if I drop the collection, and new records come in, the collection is recreated and the records are inserted. BUT the index for the unique field is not created again.
Why is it so? How do I ask it to recreate the index?
It has nothing to do with unique. If you drop the index in mongodb, you have to restart the node process.
When your application starts up, Mongoose automatically calls
ensureIndex for each defined index in your schema. Mongoose will call
ensureIndex for each index sequentially, and emit an 'index' event on
the model when all the ensureIndex calls succeeded or when there was
an error. While nice for development, it is recommended this behavior
be disabled in production since index creation can cause a significant
performance impact.
Nevertheless, in development mode, you can recreate indexes in a method (for example, a method tied to a route or event) with:
(MongoDB >= 3.0) db.collection.createIndex({...})
(MongoDB < 3.0) db.collection.ensureIndex({...})
With Mongoose it looks like this:
Model.ensureIndexes(function (err) {
  if (err) {
    return res.end(err);
  }
});
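The quoted recommendation to disable automatic index builds in production looks roughly like this (a sketch: the schema and field names are made up, and a mongoose connection is assumed elsewhere in the app):

```javascript
// Disable mongoose's automatic ensureIndex calls in production, then build
// indexes explicitly when you choose to (e.g. after dropping a collection).
var gameSchema = new mongoose.Schema(
  { gameId: { type: String, unique: true } },
  { autoIndex: process.env.NODE_ENV !== 'production' }
);

var Game = mongoose.model('Game', gameSchema);

// Rebuild indexes on demand:
Game.ensureIndexes(function (err) {
  if (err) { console.error(err); }
});
```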

Mongodb: how to compare DB to new data

Each week I receive a new copy of the source data (approximately 8,500 records, and growing, with an id field that Mongo uses as _id) and I want to look for (and save, while keeping the old data) updated information (about 30 changes/additions per month are likely). I'm trying to work out the best approach.
My first thought was, for each entry in new data, get the DB entry with that _id, compare, and update the data where changed. But that results in 8500 asynchronous calls over the net (to mongolab) + 30 upserts where new/changed data needs to be saved.
So, the alternative is to download everything at the outset. But then I end up with an Array from Mongo and would need to do Array.find each time to get the element that matches with the new data.
Is there a Mongo command to return the results of .find({}) as a JavaScript object keyed by _id? Or does it otherwise make sense to take the raw array from Mongo and convert it myself to an object?
I would store:
id + version + date + data
For each update:
Make a dump of the production DB for local usage
Work offline against a local MongoDB (because you don't want to launch 8,500 queries over the web)
For each line, compare the new data to the Mongo data
If it was modified, store a new (id + version) record; else skip it
Make a dump of your local DB
Install the dump in the production environment
See the MongoDB documentation on dumps (mongodump).
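As for returning .find({}) results keyed by _id: the usual approach is to do the conversion client-side once the array arrives, which is cheap (plain JS sketch with stand-in documents):

```javascript
// Convert the array returned by find({}) into an object keyed by _id,
// giving O(1) lookup for the per-record comparison step.
const fetched = [
  { _id: 'a', v: 1 },
  { _id: 'b', v: 2 }
];

const byId = {};
for (const doc of fetched) {
  byId[doc._id] = doc;
}
// byId['a'] and byId['b'] now give direct access to each record
```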

List recent operations on mongodb

I'm using MongoDB with Node.js framework.
There is a weird behavior that some documents are not getting inserted into db, though from orm's point of view there are no errors: err = null in callback of Collection.create() and fresh document with _id is returned. When I try to search by that _id in db - no document is found.
I tried to manually insert a new document into the db and it was successful.
Is there a way I can trace these operations from db's point of view? Some command to list recent requests and their results..?
You can enable profiling for all operations:
db.setProfilingLevel(2)
Then, look at the system.profile collection to see what happened. system.profile is a capped collection that can be searched like any other collection. Profiling can be noisy, and eventually you may have to change the size of the system.profile collection:
db.setProfilingLevel(0)
db.system.profile.drop()
db.createCollection( "system.profile", { capped: true, size:4000000 } )
db.setProfilingLevel(2)
The most notable way of tracking errors within MongoDB used to be the --diaglog option: http://docs.mongodb.org/manual/reference/program/mongod/#cmdoption--diaglog with maybe a level of 3, however 1 might be enough for you.
As noted by @Neil, this has unfortunately been deprecated as of 2.6.
The only way currently is to write out ALL operations MongoDB performs, via @Raul's answer, and then use a query like:
db.system.profile.find({op:{$in:['update', 'insert', 'remove']}});
and possibly resize the capped collection used for profiling: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/#profiler-overhead to capture the amount you want.
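Putting the pieces together, a profiler query for recent writes, newest first, might look like this (run in the mongo shell after db.setProfilingLevel(2); it needs a live database, so it is a sketch rather than a standalone script):

```javascript
// Last 5 write operations recorded by the profiler, newest first.
db.system.profile.find({ op: { $in: ['insert', 'update', 'remove'] } })
                 .sort({ ts: -1 })
                 .limit(5)
                 .pretty()
```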
