socket.io stops working after 500 concurrent connections - node.js

I have a JavaScript browser-based socket.io client simulator which creates concurrent connections (io.connect(host, {'force new connection': true});) to my socket.io node.js server. When the number of websocket connections reaches around 545, it suddenly stops working and the client falls back to xhr-polling or another fallback transport (without any success, of course).
I tried to tune my server, although I haven't found any hardware bottleneck so far. My ulimit -n is set to 40960 and ulimit -s to 8192.
sysctl -a | grep file output:
fs.file-max = 800000
fs.file-nr = 7520 0 800000
I tried setting http.globalAgent.maxSockets to 40000, but it had no effect. Even when I connect from different machines (located in different domains, no proxies in between) with only around 100 concurrent connections each, the socket.io server stops serving once 545 clients are reached.
I read a lot of blogs and stuff like:
Realtime Nodejs Stresstest Story
Caustik's blog with 250k+ connections
Does anyone have another idea of what I could have missed? I am using the latest socket.io and node.js versions I could get. My server is running on Ubuntu.
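For reference, here is a stripped-down sketch of what such a simulator loop might look like. This is my own illustration using the socket.io-client module in node rather than the asker's actual browser code; the host URL and connection count are placeholders:

// Hedged sketch: open many independent socket.io connections from one process.
// 'force new connection' (0.9.x option name) stops the client from reusing one socket.
var io = require('socket.io-client');
var host = 'http://localhost:8000';   // placeholder host
var connected = 0;

for (var i = 0; i < 1000; i++) {
  var socket = io.connect(host, { 'force new connection': true });
  socket.on('connect', function () {
    connected++;
    console.log('connected sockets:', connected);
  });
}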

Related

Too many persistent TCP connections

We have around 500 clients connected to a Linux RedHat ES 5 server.
Recently we have noticed that the server still holds connections to clients that were rebooted without first stopping the application that communicates with the server.
A netstat on the client always shows only one established connection to the server. After a client reboot, communication runs over a newly established connection. On the server side, sometimes the old connection is closed, and sometimes it stays in the ESTABLISHED state, so we end up with a growing number of established connections to each client.
Because various client operating systems are affected, I think this isn't an application issue but rather one with the Linux OS on the server.
I tried to tune the values of
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 9
without success.
I also tried raising the maximum file handles value from 1024 to 2048, but connections still never get closed, not even after the TCP keepalive time expires.
Does somebody have an idea what could cause that strange behaviour?
Those settings allow you to configure the default keep-alive behavior (when keep-alives are enabled). However, they do not make keep-alives automatic. The feature must still be explicitly enabled on a per-socket basis via the SO_KEEPALIVE socket option.
See http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/ for details. From section 3:
Remember that keepalive support, even if configured in the kernel, is not the default behavior in Linux. Programs must request keepalive control for their sockets using the setsockopt interface.
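Since this thread is about node.js, here is a minimal sketch of how keepalive can be enabled per socket from node; the port and the 60-second initial delay are just illustrative values:

// Hedged sketch: enable SO_KEEPALIVE on every accepted TCP socket.
// The initialDelay overrides tcp_keepalive_time for this socket; the probe
// interval and probe count still come from the sysctl settings shown above.
var net = require('net');

var server = net.createServer(function (socket) {
  socket.setKeepAlive(true, 60000);   // start probing after 60s of idle time
});

server.listen(9000);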

Websocket (node.js) connection limit, clients are getting disconnected after reaching 400-450 connections

I have a big problem with the socket.io connection limit. If the number of connections goes above 400-450 connected clients (from browsers), users get disconnected. I increased the soft and hard TCP limits, but it didn't help.
The problem only occurs with browsers. When I connected with the socket.io-client module from another node.js server, I reached 5000 connected clients.
This is a very big problem for me and has me totally blocked. Please help.
Update
I have tried the standard WebSocket library (the ws module with node.js) and the problem was similar; I can only reach 456 connected clients.
Update 2
I divided the connected clients between a few server instances, with each group of clients connecting on a different port. Unfortunately, this change didn't help; the total number of connected users was the same as before.
Solved (2018)
There were not enough open ports available for the Linux user that runs the pm2 manager (the "pm2" or "pm" username).
You may be hitting a limit in your operating system. There are security limits on the number of concurrently open files; take a look at this thread:
https://github.com/socketio/socket.io/issues/1393
Update:
I wanted to expand this answer because I was answering from mobile before. Each new connection that gets established is going to open a new file descriptor under your node process. Of course, each connection is going to use some portion of RAM. You would most likely run into the FD limit first before running out of RAM (but that depends on your server).
Check your FD limits: https://rtcamp.com/tutorials/linux/increase-open-files-limit/
And lastly, I suspect your single-client concurrency test was not using the correct flag to force new connections. If you want to test concurrent connections from one client, you need to pass a flag when creating each connection:
var socket = io.connect('http://localhost:3000', {'force new connection': true});

apr_socket_recv: connection reset by peer (104) nodejs Ubuntu 14.10 x64 8GB RAM 4 Core (VPS)

I am working on a node.js project about location (GPS) tracking. The ultimate target for my project is to serve 1M concurrent requests. What I have done so far:
Created a server in node.js listening on port 8000
An HTML document with a Google map to show user positions/GPS locations
A single socket connection between the server and the HTML document to pass location information
An API to get user locations from client devices (it can be a mobile app)
Once the server receives a user location via the API mentioned above, it emits that information to the client (the HTML document) via the socket.
It's all working well.
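A minimal sketch of that flow, assuming Express for the HTTP API; the route, query parameters, and event name are my own placeholders rather than the asker's actual code:

// Hedged sketch: an HTTP endpoint receives a location and the server
// broadcasts it to connected browser clients over socket.io.
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);          // 1.x-style API; adjust for your version

app.get('/API_INFO', function (req, res) {
  var position = { lat: req.query.lat, lng: req.query.lng };   // supplied by the mobile client
  io.emit('position', position);                // push to the map page(s)
  res.send('ok');
});

server.listen(8000);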
Problem
I am using ApacheBench to load test my server. When I increase the concurrency, the benchmark frequently breaks with the error
apr_socket_recv: Connection reset by peer (104)
How can I solve this? What is the actual cause of this problem?
Note: if I run the same server on my local Windows machine, it serves 20K requests successfully.
I have changed
ulimit -n 9999999
and set the soft and hard open-file limits to 1000000,
but neither solves my problem.
Please help me understand the problem clearly. How do I increase the concurrency to 1M? Is it possible with further hardware/network tuning?
Edit:
I am using SocketCluster on the server with the number of workers equal to the number of CPU cores, i.e. 4 workers in my case.
CPU usage reported by the htop command in the server terminal is 45%.
Memory usage was around 4 GB of 8 GB, and swap space was not used.
The ab command I used to load the server was:
ab -n 20000 -c 20000 "http://IP_ADDRESS:8000/API_INFO"

socket.io max connection test on multicore machine

To answer my own question: it was a client issue, not a server one. For some unknown reason, my Mac OS X machine could not make more than ~7.8k connections. Using an Ubuntu machine as the client solved the problem.
[Question]
I'm trying to estimate the maximum number of connections my server can keep open, so I wrote simple socket.io server and client test code. You can see it here: gist
The gist above does a very simple job. The server accepts all incoming socket requests and periodically prints out the number of established connections along with CPU and memory usage. The client tries to open a given number of connections to the socket.io server and does nothing but keep them open.
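For context, a counting server along those lines might look roughly like this; this is my own sketch in the 0.9.x API (the versions listed below), not the actual gist, and the port and interval are placeholders:

// Hedged sketch: count established socket.io connections and print the
// count plus memory usage every few seconds.
var io = require('socket.io').listen(8000);     // 0.9.x-style standalone server
var connections = 0;

io.sockets.on('connection', function (socket) {
  connections++;
  socket.on('disconnect', function () {
    connections--;
  });
});

setInterval(function () {
  console.log('connections:', connections, 'rss bytes:', process.memoryUsage().rss);
}, 5000);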
When I ran this test with one server (an Ubuntu machine) and one client (my Mac OS X machine), roughly 7800 connections were made successfully before connections started to drop. So next I ran more server instances on different CPU cores and ran the test again. I expected that more connections could be made in total, because the major bottleneck should be CPU power. Instead, no matter how many cores I utilized, the total number of connections the server could keep was around 7800. It's hard to understand why my server behaves like this. Can anyone explain the reason behind this behavior or point out what I am missing?
Number of connections made before dropping any connection.
1 server : 7800
3 servers : 2549, 2299, 2979 (each)
4 servers : 1904, 1913, 1969, 1949 (each)
Server-side command
taskset -c [cpu_num] node --stack-size=99999 server.js -p [port_num]
Client-side command
node client.js -h http://serveraddress:port -b 10 -n 500
b=10, n=500 means that the client waits for 10 connections to be established before attempting the next 10, until 10*500 = 5000 connections are made.
Package versions
socket.io, socket.io-client : 0.9.16
express : 3.4.8
CPU is rarely the bottleneck in these types of situations. It is more likely the maximum number of TCP connections allowed by the operating system, or a RAM limitation.

Nodejs max concurrent connections limited to 1012? (probably only on my machine xD)

So, I've been trying to test out my server code, but client sockets catch 'error' when 1012 connections have been established. Client simulator keeps trying 'til it's tried to connect as many times as I've told it to (obviously). Though, as stated, the server is unwilling to serve more than 1012 connections.
I'm running both client simulator & server on the same computer (might be dumb, but shouldn't it work anyway?).
(Running on socket.io)
The default open-file limit on many Linux distributions is 1024, and node itself uses a handful of descriptors, which would explain hitting a wall at around 1012 connections. To increase the limit of open connections/files in Linux:
ulimit -n 2048
Here is more info regarding ulimit
