Query time mongoose occasionally taking 3-4 seconds - node.js

I've written a back-end node server for a multiplayer game I'm developing, and most of the time each request takes about 20-100ms to resolve. However, sometimes (maybe 1 out of 50 requests) I will make the same request and it will take 2000+ms to resolve.
The server is written entirely in Node.js and is hosted on Heroku. I am using mongoose to make the calls to the database.
Here is a screenshot of the logs; at the top you can see how queries normally behave. The request comes in at 19:03:03.68, the response is sent out at 19:03:03.73, and saving all the data finishes at 19:03:03.74. Heroku logs the request as taking 58ms, which is the desired and expected outcome.
Below that is when the issue occurs. You can see multiple requests come in from two separate clients (each client sends ~1 request per second, which is correct), but the requests build up, and after about 2000-5000ms they all quickly resolve one after another. I've tried narrowing down the issue without much luck, but I believe it's related to when I query the database: you can see multiple requests come in, yet the first query to the database doesn't actually resolve until around 2300ms later. As far as I can tell these requests are identical to the ones that resolve in 20-100ms, and they occur completely at random.
The actual code is similar to this on the server (Simplified for the sake of this question):
console.log("request received");
Game.findOne({'id': gameID}, function(err, theGame){
    console.log("First Query");
    // ... game logic and saves happen here ...
});
I also opened up the mongo shell for the database to look for queries taking an excessive amount of time (>2000ms) with this code:
db.system.profile.find( {millis: {$gt : 2000} } ).sort( { ts: 1} );
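(Note: db.system.profile is only populated while profiling is enabled. A minimal sketch of turning it on, where the 100ms slow-op threshold is just an illustrative assumption:)
// Record operations slower than 100ms (level 1 = profile slow operations only).
db.setProfilingLevel(1, 100)
// Verify the current profiling level and threshold.
db.getProfilingStatus()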
Here are the slightly modified results which should include everything relevant:
{ "op" : "update", "ns" : "theDb.players", "query" :
{ "_id" : ObjectId("572b8eb242d70903005df0df")
}, "updateobj" :
{ "$set" :
{ "lastSeen" : ISODate("2016-05-05T18:19:30.761Z"), "timeElapsed" : 16
}
}, "nscanned" : 1, "nscannedObjects" : 1, "nMatched" : 1, "nModified" : 1, "fastmod" : true, "keyUpdates" : 0, "writeConflicts" : 0, "numYield" : 0, "locks" :
{ "Global" :
{ "acquireCount":
{ "r" : NumberLong(2), "w" : NumberLong(2) }
}, "MMAPV1Journal" :
{ "acquireCount" :
{ "w" : NumberLong(2) }, "acquireWaitCount" :
{ "w" : NumberLong(1) }, "timeAcquiringMicros" :
{ "w" : NumberLong(7294179) }
}, "Database" :
{ "acquireCount" :
{ "w" : NumberLong(2) }
}, "Collection" :
{ "acquireCount" :
{ "W" : NumberLong(1) }
}, "oplog" :
{ "acquireCount" :
{ "w" : NumberLong(1) }
}
}, "milli" : 2298, "execStats" : {}, "ts" : ISODate("2016-05-05T18:19:33.060Z")
Second Result:
{ "op" : "update", "ns" : "theDb.connections", "query" :
{ "_id" : ObjectId("572b8eaf42d70903005df0dd")
}, "updateobj" :
{ "$set" :
{ "internalCounter" : 3, "lastCount" : 3, "lastSeen" : ISODate("2016-05-05T18:19:30.761Z"), "playerID" : 128863276517, "sinceLast" : 0
}
}, "nscanned" : 1, "nscannedObjects" : 1, "nMatched" : 1, "nModified" : 1, "keyUpdates" : 0, "writeConflicts" : 0, "numYield" : 0, "locks" :
{ "Global" :
{ "acquireCount" :
{ "r" : NumberLong(2), "w" : NumberLong(2)
}
}, "MMAPV1Journal" :
{ "acquireCount" :
{ "w" : NumberLong(2) }, "acquireWaitCount" :
{ "w" : NumberLong(1) }, "timeAcquiringMicros" :
{ "w" :NumberLong(7294149) }
}, "Database" :
{ "acquireCount" :
{ "w" : NumberLong(2) }
}, "Collection" :
{ "acquireCount" :
{ "W" : NumberLong(1) }
}, "oplog" :
{ "acquireCount" :
{ "w" : NumberLong(1) }
}
}, "millis" : 2299, "execStats" : {},"ts" : ISODate("2016-05-05T18:19:33.061Z")
I really need to ensure the latency for any request never exceeds 500ms; otherwise it is extremely irritating in the game itself. I'm really at a loss for what might be causing this and how to investigate it further.
I'm assuming the cause of the issue is that timeAcquiringMicros is so long (over 7 seconds spent waiting on the MMAPv1 journal lock). I'm unsure of what is causing this, though.
*Note: the client is requesting the data with just standard HTTP requests; I'm not currently using any sockets.

Alright, I've finally solved the issue. The problem wasn't actually connected to anything I had done. I was using the sandbox plan that mLab offers through Heroku, which had my application competing for processing time with other people on the same sandbox plan. Their queries were slowing down the database, causing those spikes in response times.
The solution: I had to upgrade to their shared cluster plan. Since upgrading I haven't had any irregularities in query times.

Related

Why doesn't MongoDB cursor.maxTimeMS work?

Maybe I'm missing something, but according to the documentation and all the posts online, setting
cursor.maxTimeMS(1000).toArray(...)
should time out after 1000ms, and MongoDB should kill the operation after timeout.
But none of this is happening.
First, there is no timeout; the query keeps going.
Second, when I check db.currentOp() the operation is still there, eating up memory. These stuck operations add up and eventually crash the database with an OOM.
Anyway, running db.currentOp() after several minutes of no response prints:
{
    "inprog" : [
        {
            "host" : "db2:27017",
            "desc" : "conn20",
            "connectionId" : 20,
            "client" : "127.0.0.1:59214",
            "clientMetadata" : {
                "driver" : {
                    "name" : "nodejs",
                    "version" : "3.1.4"
                },
                "os" : {
                    "type" : "Linux",
                    "name" : "linux",
                    "architecture" : "x64",
                    "version" : "4.15.0-30-generic"
                },
                "platform" : "Node.js v8.10.0, LE, mongodb-core: 3.1.3"
            },
            "active" : true,
            "currentOpTime" : "2018-09-14T00:10:29.903+0000",
            "opid" : 11056,
            "lsid" : {
                "id" : UUID("78a2d853-30bf-4d6d-a208-0a150d9bf8be"),
                "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=")
            },
            "secs_running" : NumberLong(649),
            "microsecs_running" : NumberLong(649968360),
            "op" : "command",
As you can see, this has been running for 649 seconds, even though I explicitly specified 1000ms.
What is going on here? I've been pulling my hair out for two days and can't figure this out.
I had the same issue and had to update the MongoDB driver from 3.1.1 to 3.3.5, and it worked like a charm!
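For reference, a minimal sketch of passing maxTimeMS with a recent driver; the connection string, database, and collection names here are assumptions, not from the question:
// Sketch: maxTimeMS asks the server to abort the operation after the given
// budget; with a fixed driver this surfaces as an "operation exceeded time
// limit" error instead of hanging.
const { MongoClient } = require('mongodb');

async function main() {
    const client = await MongoClient.connect('mongodb://localhost:27017');
    try {
        const items = client.db('test').collection('items');
        const docs = await items.find({}).maxTimeMS(1000).toArray();
        console.log(docs.length);
    } catch (err) {
        console.error(err.message); // expect a timeout error on slow scans
    } finally {
        await client.close();
    }
}

main();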

mongodb taking too much time for old entries

I am new to MongoDB and I am facing an issue. I have hundreds of millions of documents in my collection, and I am trying to find a single entry using a findOne({}) command. When I fetch recent entries the response comes in milliseconds, but when I fetch older entries (around the 600 millionth document) it takes around 2 minutes in the mongo shell, and my node server gives
{ MongoError: connection 1 to 127.0.0.1:27017 timed out }
and then sends an empty response. Can anyone tell me what I should do to resolve this issue? Thanks in advance.
explain gives me:
db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "meanApp.contacts",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "phoneNumber" : {
                "$eq" : "9165900137"
            }
        },
        "winningPlan" : {
            "stage" : "COLLSCAN",
            "filter" : {
                "phoneNumber" : {
                    "$eq" : "9165900137"
                }
            },
            "direction" : "forward"
        },
        "rejectedPlans" : [ ]
    },
    "executionStats" : {
        "executionSuccess" : true,
        "nReturned" : 1,
        "executionTimeMillis" : 321188,
        "totalKeysExamined" : 0,
        "totalDocsExamined" : 495587806,
        "executionStages" : {
            "stage" : "COLLSCAN",
            "filter" : {
                "phoneNumber" : {
                    "$eq" : "9165900137"
                }
            },
            "nReturned" : 1,
            "executionTimeMillisEstimate" : 295230,
            "works" : 495587808,
            "advanced" : 1,
            "needTime" : 495587806,
            "needYield" : 0,
            "saveState" : 3871779,
            "restoreState" : 3871779,
            "isEOF" : 1,
            "invalidates" : 0,
            "direction" : "forward",
            "docsExamined" : 495587806
        }
    },
    "serverInfo" : {
        "host" : "li1025-15.members.linode.com",
        "port" : 27017,
        "version" : "3.2.16",
        "gitVersion" : "056bf45128114e44c5358c7a8776fb582363e094"
    },
    "ok" : 1
}
As indicated in the explain output, the current query is doing a collection scan (COLLSCAN). This means it has to scan every document in the collection to produce a match, and you have about half a billion documents.
Try adding this index; it may take a while to build:
db.contacts.createIndex( { phoneNumber: 1 }, { background: true } )
Run the query once the index build has finished and you should see a dramatic improvement in performance. To be certain the index got picked up, run explain again; it should no longer say COLLSCAN.
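For instance, a quick sanity check once the build completes (the expected stages below are what standard explain output would show, not taken from the question):
// Re-run explain; the winning plan should now use the index:
// an IXSCAN under a FETCH stage instead of COLLSCAN, with
// totalKeysExamined and totalDocsExamined near 1 instead of ~495 million.
db.contacts.find({ "phoneNumber": "9165900137" }).explain("executionStats")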

MongoDB-Query Optimization

I have a collection with a sub-document consisting of more than 40K records.
My aggregate query takes about 300 seconds. I have tried optimizing it using compound as well as multikey indexes, which brings it down to 180 seconds.
I still need to reduce the query execution time further.
Here is a sample document from my collection:
{
    "_id" : ObjectId("545b32cc7e9b99112e7ddd97"),
    "grp_id" : 654,
    "user_id" : 2,
    "mod_on" : ISODate("2014-11-06T08:35:40.857Z"),
    "crtd_on" : ISODate("2014-11-06T08:35:24.791Z"),
    "uploadTp" : 0,
    "tp" : 1,
    "status" : 3,
    "id_url" : [
        { "mid" : "xyz12793" },
        { "mid" : "xyz12794" },
        { "mid" : "xyz12795" },
        { "mid" : "xyz12796" }
    ],
    "incl" : 1,
    "total_cnt" : 25,
    "succ_cnt" : 25,
    "fail_cnt" : 0
}
and the following is my query:
db.member_id_transactions.aggregate([
    { '$match': { id_url: { '$elemMatch': { mid: 'xyz12794' } } } },
    { '$unwind': '$id_url' },
    { '$match': { grp_id: 654, 'id_url.mid': 'xyz12794' } }
])
Has anyone faced the same issue?
Here's the output of the aggregate query with the explain option:
{
    "result" : [
        {
            "_id" : ObjectId("546342467e6d1f4951b56285"),
            "grp_id" : 685,
            "user_id" : 2,
            "mod_on" : ISODate("2014-11-12T11:24:01.336Z"),
            "crtd_on" : ISODate("2014-11-12T11:19:34.682Z"),
            "uploadTp" : 1,
            "tp" : 1,
            "status" : 3,
            "id_url" : [
                { "mid" : "xyz12793" },
                { "mid" : "xyz12794" },
                { "mid" : "xyz12795" },
                { "mid" : "xyz12796" }
            ],
            "incl" : 1,
            "__v" : 0,
            "total_cnt" : 21406,
            "succ_cnt" : 21402,
            "fail_cnt" : 4
        }
    ],
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(0, 0),
        "electionId" : ObjectId("545c8d37ab9cc679383a1b1b")
    }
}
One way to reduce the number of records being filtered further is to include the field grp_id in the first $match operator.
db.member_id_transactions.aggregate([
{$match:{ "id_url.mid": 'xyz12794',"grp_id": 654 } },
{$unwind: "$id_url" },
{$match: { "id_url.mid": "xyz12794" } }
])
See how the performance is now. Add grp_id to the index to get better response time.
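For instance, a compound index sketch (the exact key order here is an assumption based on the equality filters being used):
// Equality filters on grp_id and id_url.mid can share one compound index.
db.member_id_transactions.createIndex({ "grp_id": 1, "id_url.mid": 1 })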
The above aggregation query, though it works, is unnecessary. Since you are not altering the structure of the document, and you expect only one element in the array to match the filter condition, you can just use a simple find with a projection.
db.member_id_transactions.find(
{ "id_url.mid": "xyz12794","grp_id": 654 },
{"_id":0,"grp_id":1,"id_url":{$elemMatch:{"mid":"xyz12794"}},
"user_id":1,"mod_on":1,"crtd_on":1,"uploadTp":1,
"tp":1,"status":1,"incl":1,"total_cnt":1,
"succ_cnt":1,"fail_cnt":1
}
)

arangodb replication applier stopped with error 1413: no start tick

I followed the replication (master-slave) setup guide and got data replicated fine, but the applier stopped with the following error. I searched the manual and googled without finding much other than the error message itself. I'd really appreciate any clues on how to troubleshoot this. Thanks!
arangosh [_system]> require("org/arangodb/replication").applier.state();
{
    "state" : {
        "running" : false,
        "lastAppliedContinuousTick" : null,
        "lastProcessedContinuousTick" : null,
        "lastAvailableContinuousTick" : null,
        "progress" : {
            "time" : "2014-04-15T20:20:13Z",
            "message" : "applier stopped",
            "failedConnects" : 0
        },
        "totalRequests" : 5,
        "totalFailedConnects" : 0,
        "totalEvents" : 0,
        "lastError" : {
            "time" : "2014-04-15T20:20:13Z",
            "errorMessage" : "no start tick",
            "errorNum" : 1413
        },
        "time" : "2014-04-15T20:37:50Z"
    },
    "server" : {
        "version" : "2.0.4",
        "serverId" : "83323931320193"
    },
    "endpoint" : "tcp://master:8529",
    "database" : "_system"
}
This might happen if you start the applier for the very first time without specifying a tick. Use
require("org/arangodb/replication").applier.start(1)
instead.
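If you'd rather not replay from tick 1, you can seed the applier with the master's current tick instead; a sketch under the 2.x API shown above (run the first line on the master, the second on the slave):
// On the master: read the current last log tick from the replication logger.
var tick = require("org/arangodb/replication").logger.state().state.lastLogTick;
// On the slave: start the applier from that tick.
require("org/arangodb/replication").applier.start(tick);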

Bulk mongodb insert in Meteor or Node

MongoDB supports bulk inserts: http://docs.mongodb.org/manual/core/bulk-inserts/
I have tried it in Meteor collection:
Orders.insert([
{ "cust_id" : "A123", "amount" : 500, "status" : "A", "_id" : "iZXL7ewBDALpic8Fj" },
{ "cust_id" : "A123", "amount" : 250, "status" : "A", "_id" : "zNrdBAxxeNZQ2yrL9" },
{ "cust_id" : "B212", "amount" : 200, "status" : "A", "_id" : "vev3pjJ8cmiDHHxe4" },
{ "cust_id" : "A123", "amount" : 300, "status" : "D", "_id" : "BBLngRhS76DgeHJQJ" }
]);
but it creates just
{ "0" : { "cust_id" : "A123", "amount" : 500, "status" : "A", "_id" : "iZXL7ewBDALpic8Fj"},
"1" : { "cust_id" : "A123", "amount" : 250, "status" : "A", "_id" : "zNrdBAxxeNZQ2yrL9" },
"2" : { "cust_id" : "B212", "amount" : 200, "status" : "A", "_id" : "vev3pjJ8cmiDHHxe4" },
"3" : { "cust_id" : "A123", "amount" : 300, "status" : "D", "_id" : "BBLngRhS76DgeHJQJ" },
"_id" : "6zWayeGtQCdfS65Tz" }
I need it for performance-testing purposes: I need to fill the database with thousands of test items. I currently do the inserts in a forEach loop, but it takes too long to fill the database.
Is there any workaround? Or can we expect Meteor to support this in a future version?
You could use exec (nodejs docs) to run a mongo script from within a Meteor.startup block on the server.
Example:
Meteor.startup(function () {
var exec = Npm.require('child_process').exec;
exec('mongo localhost:27017/meteor path-to/my-insert-script.js', function ( ) {
// done
});
});
Not optimal, but I think it's your best bet for now. You can also use Mongo's --eval command-line option in exec and pass the insert statement as a string. That might look like this:
Meteor.startup(function () {
var exec = Npm.require('child_process').exec;
exec('mongo localhost:27017/meteor --eval \'db.Orders.insert(' + JSON.stringify(arrOfOrders) + ')\'', function ( ) {
// done
});
});
When inserting a lot of data into the DB, e.g. in a forEach loop, you want to make sure that there is no reactive content on your page that depends on it; otherwise the reactive rerendering is going to slow your client down tremendously. You can easily insert several thousand documents into a collection in a fraction of a second when all templates are disabled, while the same operation can take several minutes, with your CPU at 100% on both the client and the server, if relevant reactivity is happening.
You may want to add a condition to any template whose content depends on this data, such as:
Template.myTemplate.items = function() {
if (Session.get("active")) {
return Order.find();
}
}
Then you can deactivate all reactive rerendering before your forEach loop (Session.set("active", false)) and reactivate it again afterwards (Session.set("active", true)).
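A sketch of that pattern on the client (the orders array here is an assumption):
// Pause reactive rerendering while bulk-inserting, then resume.
Session.set("active", false);
orders.forEach(function (order) {
    Orders.insert(order);
});
Session.set("active", true);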
You could use rawCollection(), which exposes the underlying Node MongoDB driver collection of a Meteor.Collection:
await Orders.rawCollection().insertMany(arrOfOrders)
This works on 70M documents in my case.
(await makes the call synchronous in style, so consider whether that suits your purpose.)
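For example, to seed test data on the server (a sketch; the document fields and count are illustrative assumptions):
// Meteor server-side sketch: generate fake orders and insert them
// in a single round trip via the raw Node driver collection.
Meteor.startup(async function () {
    const docs = [];
    for (let i = 0; i < 10000; i++) {
        docs.push({
            cust_id: 'C' + (i % 100),                 // assumed test field
            amount: Math.floor(Math.random() * 1000), // assumed test field
            status: i % 10 === 0 ? 'D' : 'A'          // assumed test field
        });
    }
    await Orders.rawCollection().insertMany(docs);
    console.log('inserted', docs.length, 'test orders');
});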
