Mongodb query slow response time - node.js

I'm working on a project that uses flexible schemas. I've set up a local mongodb server and am using mongoose inside node.
I'm having an interesting scaling problem and was wondering if these response times are normal. If a query returns 50 documents, it takes 5-10 seconds for mongo to respond. In the same collection, a query that returns 2 documents takes milliseconds.
It's not a slow connection because the server is local; I was wondering if anyone had an idea as to what is causing this.
I'm using OS X and mongo 3.0.1
Edit: The documents are nearly empty at the moment, with just one or two properties.
Edit: The total number of documents doesn't really matter, just the returned size. If there are 51 documents, 50 like {_id: "...", _schema:"bar"} and 1 {_id:"...", _schema: "foobar" } then collection.find({_schema:"bar"}) takes several seconds and collection.find({_schema:"foobar"}) takes no time.
Explain output:
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "mean-dev.documentmodels",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [ ]
},
"winningPlan" : {
"stage" : "COLLSCAN",
"filter" : {
"$and" : [ ]
},
"direction" : "forward"
},
"rejectedPlans" : [ ]
},
"serverInfo" : {
"host" : "Sams-MBP.local",
"port" : 27017,
"version" : "3.0.1",
"gitVersion" : "nogitversion"
},
"ok" : 1

No, it should not take that much time.
Your explain output shows the winning plan is a COLLSCAN, i.e. a full collection scan, so every query walks the entire collection. The way to fix that is to create an index on the field you are querying.
To create an index on the _schema field, execute this command in the mongo shell:
db.collection.createIndex({"_schema": 1});
(ensureIndex works as well, but it has been deprecated in favor of createIndex since MongoDB 3.0.)
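To confirm the index is picked up, re-running the query with explain should now report an index scan (documentmodels is the collection name taken from the namespace in the explain output above):

db.documentmodels.find({ "_schema": "bar" }).explain();
// the winningPlan stage should now be IXSCAN instead of COLLSCAN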

Related

Why doesn't MongoDB cursor.maxTimeMS work?

Maybe I'm missing something, but according to the documentation and all the posts online, setting
cursor.maxTimeMS(1000).toArray(...)
should time out after 1000ms, and MongoDB should kill the operation after timeout.
But none of this is happening.
First, there is no timeout. It keeps going.
Second, I check db.currentOp() and the operation is still there, eating up all the memory. This later adds up and crashes the database with OOM.
Anyway, running db.currentOp() after several minutes of no response prints:
{
    "inprog" : [
        {
            "host" : "db2:27017",
            "desc" : "conn20",
            "connectionId" : 20,
            "client" : "127.0.0.1:59214",
            "clientMetadata" : {
                "driver" : {
                    "name" : "nodejs",
                    "version" : "3.1.4"
                },
                "os" : {
                    "type" : "Linux",
                    "name" : "linux",
                    "architecture" : "x64",
                    "version" : "4.15.0-30-generic"
                },
                "platform" : "Node.js v8.10.0, LE, mongodb-core: 3.1.3"
            },
            "active" : true,
            "currentOpTime" : "2018-09-14T00:10:29.903+0000",
            "opid" : 11056,
            "lsid" : {
                "id" : UUID("78a2d853-30bf-4d6d-a208-0a150d9bf8be"),
                "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=")
            },
            "secs_running" : NumberLong(649),
            "microsecs_running" : NumberLong(649968360),
            "op" : "command",
As you can see, this has been running for 649 seconds, even though I explicitly specified 1000ms.
What is going on here? I've been pulling my hair out for two days and can't figure this out.
I had the same issue and had to update the mongodb driver from 3.1.1 to 3.3.5 and it worked like a charm!
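For reference, a minimal sketch of how maxTimeMS is meant to be used with the 3.x Node.js driver (the database and collection names here are placeholders); on a driver version affected by the bug above, the timeout error below may simply never fire:

const { MongoClient } = require('mongodb');

async function run() {
    const client = await MongoClient.connect('mongodb://localhost:27017');
    try {
        const coll = client.db('test').collection('docs'); // placeholder names
        // ask the server to abort the operation after 1000ms of execution time
        const docs = await coll.find({}).maxTimeMS(1000).toArray();
        console.log(docs.length);
    } catch (err) {
        // on timeout the server reports an ExceededTimeLimit error
        console.error(err.message);
    } finally {
        client.close();
    }
}
run();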

Correct way to connect node.js to a sharded replica cluster in MongoDB using mongoose

So recently we redesigned our MongoDB database cluster to use SSL and replica sets in addition to the sharding we had already implemented. SSL wasn't too difficult to get working; we just needed to split up the private key and certificate, and then everything worked fine. However, getting my Node.js app to connect to both mongos instances is proving to be more difficult than I anticipated.
Before we implemented replica sets, we just had two shards, each of them running a mongos router, and in mongoose I gave it the following connection string:
mongodb://Host1:27017,Host2:27017/DatabaseName
Then, in the options object to the connection, I passed in the following:
{mongos: true}
This seemed to work just fine. However, since the replica sets were implemented, the application never connects whenever I pass the mongos option. Our cluster is now set up so that there are 4 MongoDB servers in 2 replica sets of 2 servers each. The master in each replica set also runs a mongos router instance. I assumed I would be able to connect the same way as before, but it never connects. If I create the connection using just 1 shard with no options, the application connects just fine. However, this is not ideal, as the whole point is to have redundancy among the router instances. Can anyone offer some insight here?
Here is the output of sh.status():
--- Sharding Status ---
sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("57571fc5bfe098f05bbbe370")
}
shards:
    { "_id" : "rs0", "host" : "rs0/mongodb-2:27018,mongodb-3:27018" }
    { "_id" : "rs1", "host" : "rs1/mongodb-4:27018,mongodb-5:27018" }
active mongoses:
    "3.2.7" : 4
balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
        No recent migrations
databases:
    { "_id" : "Demo", "primary" : "rs0", "partitioned" : true }
I was asked for the output of rs.config(); here it is from the first master node:
{
    "_id" : "rs0",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "mongodb-2:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "mongodb-3:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "getLastErrorModes" : {
        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("57571692c490a699f61e3784")
    }
}
Alright, so I finally figured it out. I went through the logs on the server and saw that the client was trying to connect without SSL, so it kept getting booted by the server. This was confusing to me because I had set SSL in the server options and had the correct keys and cert bundle, and I was able to connect to a single instance just fine. Then I looked through the mongo driver's connection options: it turns out there are SSL options you need to set for mongos itself. After setting these explicitly, I was able to connect.
In summary, this options object allowed me to connect:
var options = {
    "server": {
        "ssl": true,
        "sslCA": sslCAbuffer,
        "sslCert": sslCertbuffer,
        "sslKey": sslKeybuffer
    },
    "mongos": {
        "ssl": true,
        "sslCA": sslCAbuffer,
        "sslCert": sslCertbuffer,
        "sslKey": sslKeybuffer
    }
}
while this options object did not:
var options = {
    "server": {
        "ssl": true,
        "sslCA": sslCAbuffer,
        "sslCert": sslCertbuffer,
        "sslKey": sslKeybuffer
    },
    "mongos": true
}
I think the server object is probably redundant, but I left it in.
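For context, here is a sketch of how that options object plugs into mongoose (this assumes the mongoose 4.x-era connect signature in use at the time; the certificate file paths are placeholders):

var mongoose = require('mongoose');
var fs = require('fs');

// read the CA bundle, client certificate, and key into buffers
// (file names are placeholders)
var sslCAbuffer = fs.readFileSync('ssl/ca.pem');
var sslCertbuffer = fs.readFileSync('ssl/client.crt');
var sslKeybuffer = fs.readFileSync('ssl/client.key');

// options is the working object shown above
mongoose.connect('mongodb://Host1:27017,Host2:27017/DatabaseName', options);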

Mongo: index performance and order of query fields

I am puzzled by MongoDB's indexes and their performance. I am building a node.js app to find dishes within a certain area.
My table looks like this and has about 3M dishes:
{
    "_id" : ObjectId("560efcf76ea0f2293c60bf6a"),
    "name" : "Brunch Pizza",
    "loc" : [
        -77.063166,
        38.906866
    ],
    "rs" : NumberInt(4)
}
I have several indexes, but here are the relevant ones:
{ "loc" : "2d" }
{ "loc" : "2d", "name" : 1}
Now, when I query on either field alone, the response times are very quick (less than 0.2 seconds). When I query on both together, it takes about 1 or 2 seconds. What am I doing wrong?
sort is always: {rs:-1}
{"loc":{"$within":{"$box":[[-78.0,38.0],[-77.0,39.0]]}}}: 0.173s for 186k documents
{name:/pizza/gi}: 0.112s
but
{"loc":{"$within":{"$box":[[-78.0,38.0],[-77.0,39.0]]}}, name:/pizza/gi}: 2s
{name:/pizza/gi, "loc":{"$within":{"$box":[[-78.0,38.0],[-77.0,39.0]]}}}: 2s
These are the numbers from MongoChef; the times are similar when called from node.
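A likely factor is that an unanchored, case-insensitive regex like /pizza/gi cannot use index bounds on name, so the name filter has to be applied to every document matched by the box (about 186k here). Comparing the winning plans should confirm this (a shell sketch; the collection name dishes is an assumption):

// fast: the 2d index alone satisfies the box query
db.dishes.find({ loc: { $within: { $box: [[-78.0, 38.0], [-77.0, 39.0]] } } }).explain();

// slow: the regex on name is applied as a filter over all the box matches
db.dishes.find({
    loc: { $within: { $box: [[-78.0, 38.0], [-77.0, 39.0]] } },
    name: /pizza/gi
}).explain();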

updating nested document with mongoDb + nodeJs

I have a structure like this:
{
    "_id" : ObjectId("501abaa341021dc3a1d0c70c"),
    "name" : "prova",
    "idDj" : "1",
    "list" : [
        {
            "id" : 1,
            "votes" : 2
        },
        {
            "id" : 2,
            "votes" : 4
        }
    ]
}
And I'm trying to increase votes with this query:
session_collection.update(
    { '_id': session_collection.db.bson_serializer.ObjectID.createFromHexString(idSession), 'list.id': idSong },
    { $inc: { 'list.$.votes': 1 } },
    { safe: true },
    callback);
But it doesn't work; it reports no errors, it just doesn't update anything.
I thought it might be because of the ['] (single quotation marks) around 'list.id' and 'list.$.votes', because the same query works perfectly inside the terminal.
Thanks!
I suspect your matching is not working as you expect. The callback receives
function(err, numberOfItemsUpdated, wholeUpdateObject)
numberOfItemsUpdated should equal 1 if your matching worked. You'll need to check whether idSession and idSong are what you think they are.
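A minimal sketch of that check, using the same legacy callback-style API as the question; the explicit cast of idSong is the usual culprit when the shell works but the app does not, since list.id is stored as a number while request parameters typically arrive as strings:

session_collection.update(
    {
        '_id': session_collection.db.bson_serializer.ObjectID.createFromHexString(idSession),
        'list.id': parseInt(idSong, 10) // cast: the shell infers a number, the app may pass a string
    },
    { $inc: { 'list.$.votes': 1 } },
    { safe: true },
    function (err, numberOfItemsUpdated, wholeUpdateObject) {
        console.log(err, numberOfItemsUpdated); // expect numberOfItemsUpdated === 1 on a successful match
    });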

Mongodb increased db.currentOp() issue

My site uses mongodb for its chat application. Mongodb queries are timing out, so I checked db.currentOp(). Below are the currentOp() counts and Mongodb details:
637 active operations
750 inactive operations
Other details about mongodb:
Mongo db is running with sharding
I have two databases:
a) the first database has only two tables
b) the second database has 5 tables
My questions are: why did the currentOp() count increase suddenly, and what causes do we have to take care of when the currentOp() count is increased? Please help me on this, and apologies for my bad English.
Below is a sample of my currentOp() output:
MongoDB shell version: 1.8.2
> db.currentOp()
{
    "inprog" : [
        {
            "opid" : "msdata1:234234234",
            "active" : true,
            "lockType" : "read",
            "waitingForLock" : false,
            "secs_running" : 43534,
            "op" : "getmore",
            "ns" : "local.oplog.rs",
            "query" : {
            },
            "client_s" : "70.52.078.123:12345",
            "desc" : "conn"
        },
        {
            "opid" : "msdata1:2342323423",
            "active" : true,
            "lockType" : "read",
            "waitingForLock" : false,
            "secs_running" : 231231,
            "op" : "query",
            "ns" : "ichat.chatmemberlist",
            "query" : {
                "count" : "chatmemberlist",
                "query" : {
                    "Mid" : "23423",
                    "bmid" : "23423"
                }
            },
            "client_s" : "70.52.078.123:12345",
            "desc" : "conn"
        },
        {
            "opid" : "msdata1:2342323423",
            "active" : false,
            "lockType" : "write",
            "waitingForLock" : true,
            "op" : "update",
            "ns" : "?ichat.useravail",
            "query" : {
                "Mid" : "23423"
            },
            "client_s" : "70.512.078234.423:12345",
            "desc" : "conn"
        },
        ...
        ...
        ...
From the limited amount of info, I can see that your queries are just running a really long time: "secs_running" : 231231 means 231,231 seconds, which is over two and a half days. It's likely that you don't have enough resources available for the type of queries that you are running. It could be that you don't have enough memory, or perhaps too many queries are contending for a lock. If you're not on MongoDB 2.0.x yet, you might want to upgrade to that too, as it has vastly improved locking: http://blog.pythonisito.com/2011/12/mongodbs-write-lock.html
I would advise checking the mongodb.log file to see which queries are slow, then using explain to figure out whether you have indexes on the queried fields, and then either adding indexes or seeing whether re-designing your schema might be the better solution.
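A shell sketch of that workflow (the 100 ms profiling threshold is an arbitrary choice; the query and fields are taken from the currentOp output above):

// log every operation slower than 100ms to db.system.profile
db.setProfilingLevel(1, 100);

// look at the most recent slow operations
db.system.profile.find().sort({ ts: -1 }).limit(5);

// explain the count query seen in currentOp, then index its fields
db.chatmemberlist.find({ Mid : "23423", bmid : "23423" }).explain();
db.chatmemberlist.ensureIndex({ Mid : 1, bmid : 1 });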
