Too many connections to MongoDB - Node.js with Sails

I'm developing an application using Node.js and Sails.
I'm going to run about 20 instances of the same app at the same time, and all of them will use a local MongoDB to store model data.
My problem started like this: only the first 7 or 8 launched apps were starting; the others were failing because they couldn't connect to the database.
OK, I did some searching and saw that I had to increase the number of allowed connections, but what made me think something was wrong is this: each launched app creates about 35 connections!
So, when launching 6 or 8 apps, they were taking about 250 connections!
That seems too much, since only one connection per app should be enough (I think). Is this 'normal', or is it some problem in the Sails Waterline core?

Any solution on this issue?
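For reference, the sails-mongo adapter passes pool options through to the MongoDB driver, so the per-instance connection footprint can be capped. A minimal sketch, assuming a Sails 0.x-style config/connections.js (the connection name, database, and poolSize value are illustrative, not the asker's actual config):

    // config/connections.js -- illustrative values only
    module.exports.connections = {
      localMongo: {
        adapter: 'sails-mongo',
        host: 'localhost',
        port: 27017,
        database: 'myAppDb',
        // Cap the driver's connection pool for this app instance
        poolSize: 5
      }
    };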
I have the same issue (load-balanced instances connecting to Mongo) without using Sails...
Another issue is that, due to "zero downtime deploys", I clone the cluster and then change the DNS, so I temporarily have double the number of connections.
So in my case I'm also listening to SIGINT and SIGQUIT and closing the connections before the app terminates, so that hopefully the keep-alive connections die together with the app.
There are tons of people with similar problems around, but I failed to find a spot-on solution.
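For the signal-handling part, a minimal sketch of closing the MongoDB connection on SIGINT/SIGQUIT before the process exits (using the plain mongodb driver; the URL and startup code are illustrative):

    // Close the MongoDB client on shutdown signals so keep-alive
    // connections don't outlive the process.
    const { MongoClient } = require('mongodb');

    let client;

    async function start() {
      client = await MongoClient.connect('mongodb://localhost:27017');
      // ... start the HTTP server, etc. ...
    }

    async function shutdown(signal) {
      console.log(`Received ${signal}, closing MongoDB connection...`);
      try {
        if (client) await client.close();
      } finally {
        process.exit(0);
      }
    }

    process.on('SIGINT', () => shutdown('SIGINT'));
    process.on('SIGQUIT', () => shutdown('SIGQUIT'));

    start();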

Related

How to use all CPUs with Node.js

We have a production chat app built with Socket.IO and Node.js.
We use Express.
Node.js is a bit old: 10.21.0.
Socket.IO is at 3.1.1.
Our machine is a VM with 4 vCPUs and 16 GB RAM.
We use pm2 to manage starting the Node app with environment variables.
We are facing an issue when there are about 500 users in the chat and they are writing. Bandwidth usage is around 250 Mbps upstream (but we have a 10G link, so that is not the issue). This is where the problem begins: our logs fill up with connections/disconnections and pm2 restarts the app.
Looking in more detail with "pm2 monit", we can see that only one processor is used and it sits above 100% most of the time.
We read some documentation about clustering (cluster + fork). It seems interesting, but when we tested it, it was as if we had several separate chat apps: for the same "chat room", users end up in different workers, so it's not OK.
Do you have an idea how we can fix that and use all processors/cores?
We are already thinking of starting by upgrading Node.js.
Thanks
Niko
Since Node.js is single-threaded (aside from worker threads), upgrading Node won't get you very far here (other than newer Node versions shipping newer V8 engines, which might be somewhat faster).
it was as if we had several separate chat apps: for the same "chat room", users end up in different workers, so it's not OK.
This sounds like you've architected your app to keep that shared room state in global variables or other in-process state. If you want to use cluster or pm2's multiple-process mode, that state will need to live somewhere else: maybe a second Node application or, say, a Redis server.
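For Socket.IO specifically (the question mentions 3.1.1), the usual way to share room and broadcast state across workers is the Redis adapter, so every worker publishes events through Redis instead of keeping them in-process. A minimal sketch, assuming the socket.io-redis package and a local Redis server (port, events, and options are illustrative):

    // Each pm2/cluster worker runs this; Redis fans out broadcasts,
    // so users in the same room can live on different workers.
    const httpServer = require('http').createServer();
    const { Server } = require('socket.io');
    const redisAdapter = require('socket.io-redis');

    const io = new Server(httpServer, { /* cors, etc. */ });
    io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

    io.on('connection', (socket) => {
      socket.on('join', (room) => socket.join(room));
      socket.on('message', (room, msg) => io.to(room).emit('message', msg));
    });

    httpServer.listen(process.env.PORT || 3000);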

Node.js Active handles rise suddenly

I have a Parse Server, a Node.js + Express wrapper, backing a mobile app (about 100 simultaneous users every day), hosted on DigitalOcean. The app server communicates with MongoDB, which is hosted on another DigitalOcean droplet. I'm using pm2 as a process manager, together with its web-based monitoring tool. In the same process we run LiveQuery, a WebSocket server also built by the Parse community.
The thing is, I've been having some performance issues with the server. Everything works smoothly until the active handles rise uncontrollably! It's like after one point the server says "I'm done! Now I rest!"
Usually the active handles stay between 30 and 70. The moment I restart the process with pm2 restart, everything goes back to normal!
I've been having this issue for quite some time now and I haven’t been able to figure out what’s causing it! Any help will be greatly appreciated!
EDIT: I did a stress test where I created 200 LiveQuery sockets for 1 user, instead of the 2 that a user normally has, and there was a spike of 300 active handles for about 5 seconds! The moment the sockets were all created, everything went back to normal!
I usually use restarts based on memory usage:
pm2 start filename.js --max-memory-restart 160M --exp-backoff-restart-delay=100
pm2 also has built-in cron-style restarts and a startup-script setup in case the server ever reboots; see https://pm2.keymetrics.io/docs/usage/restart-strategies/
It would be good if pm2 provided restart options based on active connections or heap memory.
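The same restart strategies can also live in a pm2 ecosystem file instead of CLI flags; a minimal sketch (the app name, script path, and thresholds are illustrative):

    // ecosystem.config.js -- illustrative values only
    module.exports = {
      apps: [{
        name: 'parse-server',
        script: './filename.js',
        // Restart once the process exceeds this memory footprint
        max_memory_restart: '160M',
        // Back off between successive restarts instead of hammering the box
        exp_backoff_restart_delay: 100,
        // Optional nightly restart (cron syntax)
        cron_restart: '0 4 * * *'
      }]
    };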

How to warm up a Heroku Node.js server?

Heroku reboots servers everyday. After reboot, my node server takes around 20 seconds to load a page for the first time. Is there a way to prevent this?
EDIT: You guys seem to be misunderstanding the situation. In Heroku, even production servers must be restarted daily. This is not the same as a free server sleeping. This question is aimed more at preventing lazy-loading and pre-establishing connection pools to databases.
Old question, but in case others stumble upon it like I did:
You can use Heroku's Preboot feature:
Preboot changes the standard dyno start behavior for web dynos. Instead of stopping the existing set of web dynos before starting the new ones, preboot ensures that the new web dynos are started (and receive traffic) before the existing ones are terminated. This can contribute to zero downtime deployments.
You could also combine it with a warmup script like the one described in this Heroku post.
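The warmup idea is to do the expensive, lazy work at boot so the first real request doesn't pay for it. A rough sketch, assuming a Mongoose-backed Express app (the module path and environment variables are illustrative):

    // server.js -- connect to the database and prime anything lazy
    // before accepting traffic, instead of on the first request.
    const mongoose = require('mongoose');
    const app = require('./app'); // hypothetical Express app module

    async function warmUp() {
      // Pre-establish the database connection pool
      await mongoose.connect(process.env.MONGODB_URI);
      // Prime caches, compile templates, etc. here if needed
    }

    warmUp()
      .then(() => app.listen(process.env.PORT || 3000))
      .catch((err) => {
        console.error('Warm-up failed:', err);
        process.exit(1);
      });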

Figuring out how many simultaneous connections Heroku can have with socket.io

I have a Node.js app on Heroku that uses Socket.IO. I got Heroku to work with Socket.IO using long polling. Then I recently enabled WebSocket support on Heroku and got Socket.IO working over WebSockets.
My question is: how can I measure the maximum number of simultaneous connections the Heroku instance can handle before it thrashes or its performance degrades?
You'll need to have two things in place:
Some sort of testing client or script that you can ask to fire up an arbitrary number of WebSockets and keep them open for the remainder of the test.
Proper monitoring of your dyno's performance. For this you'll want to use a monitoring add-on; I like to use Librato.
After that it's just about running your test scenario and tweaking until you're satisfied with your memory and load limits.
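For the client side, a rough sketch of a script that opens an arbitrary number of Socket.IO connections and holds them open (assuming a v3+ socket.io-client package; the target URL and connection count are illustrative):

    // Open COUNT sockets against the target app and keep them alive
    // while you watch dyno memory/load in your monitoring add-on.
    const { io } = require('socket.io-client');

    const TARGET = process.env.TARGET_URL || 'https://your-app.herokuapp.com';
    const COUNT = parseInt(process.env.COUNT || '500', 10);

    const sockets = [];
    for (let i = 0; i < COUNT; i++) {
      const socket = io(TARGET, { transports: ['websocket'] });
      socket.on('connect', () => console.log(`socket ${i} connected`));
      socket.on('disconnect', (reason) => console.log(`socket ${i} dropped: ${reason}`));
      sockets.push(socket);
    }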

IIS Connection Pool interrogation/leak tracking

Per this helpful article I have confirmed I have a connection pool leak in some application on my IIS 6 server running W2k3.
The tough part is that I'm serving 300 websites written by 700 developers from this server across 6 application pools, 50% of which are .NET 1.1, which doesn't even show connections in the CLR Data performance counters. I could watch connections grow on my end if everything were .NET 2.0+, but I'm out of luck even with that slim monitoring tool.
My 300 websites connect to probably 100+ databases spread out between Oracle, SQLServer and outliers, so I cannot watch the connections from the database end either.
Right now my best and only plan is to do a loose binary search for my worst offenders. I will kill application pools and slowly remove applications from them until I find which individual applications result in the most connections dropping when I kill their pool. But since this is a production box and I like continued employment, this could take weeks as a tracing method.
Does anyone know of a way to interrogate the IIS connection pools to learn their origin or owner? Is there an MSMQ trigger to which I might be able to attach when they are created? Anything silly I'm overlooking?
Kevin
(I'll include the error code to facilitate others finding your answers through search:
Exception: System.InvalidOperationException
Message: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.)
Try starting with this first article from Bill Vaughn.
Todd Denlinger wrote a fantastic class (http://www.codeproject.com/KB/database/connectionmonitor.aspx) that watches SQL Server connections and reports on ones that have not been properly disposed of within a period of time. Wire it into your site, and it will let you know when there is a leak.
