I am using mosquitto (v1.5.8) as my broker. I want to connect to the broker from a browser, so I'm using MQTT over WebSockets. What configuration do I need to include in the mosquitto.conf file to get the maximum (or unlimited) number of connections?
In mosquitto.conf there is a parameter named max_connections; its default value is -1, which means unlimited connections. In practice, however, truly unlimited connections are impossible: the maximum number of concurrent connections an MQTT broker can hold depends on the underlying operating system.
For example, on Linux/Ubuntu the limit is determined by the number of open file descriptors the broker process is allowed to have. You can check this with the command ulimit -a, which lists all limits; look at the "open files" entry, whose value is usually 1024.
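As a minimal sketch (the WebSockets port 9001 is just a common example, not something the question specifies), the relevant part of mosquitto.conf might look like this:

    # listener for browser clients over WebSockets (port is an example)
    listener 9001
    protocol websockets

    # -1 = no broker-side cap; the OS open-files limit still applies
    max_connections -1

Even with max_connections -1 the broker cannot exceed the process's open-files limit, so for a large number of browser clients you would also raise that limit (for example with ulimit -n, or with LimitNOFILE in the broker's systemd unit, if you run it that way).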
Related
I am using a timeout mechanism to close my connection sockets, which closes connections that have been inactive for more than 15 s. But my Linux system seems to allow me to create only 1024 sockets at the same time.
So I want to ask:
(1) Is there a limit on the number of connection sockets?
(2) If there is an upper limit, does that mean the timeout mechanism cannot cope with a large number of concurrent requests arriving in the same time window?
(3) I am using a dual-core, 4 GB virtual machine. Will upgrading the machine increase the number of sockets I can create?
(4) If (2) is correct, what method (or what timeout) should the server use to close sockets?
See the backlog parameter mentioned here:
All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).
A more thorough explanation of this parameter would be appreciated. Some questions that come to mind after reading that description:
When is a connection 'pending' vs accepted?
How do you max out this backlog limitation?
What happens when the backlog parameter is exceeded, is an error returned?
This parameter will help you throttle connections to your server and, hence, protect it from "too many connections" issues.
Without this parameter, your server would (theoretically speaking, and ignoring OS limitations) accept any number of connections. And since each connection consumes memory on the underlying machine, you will reach an "out of memory" situation when there are too many connections to your server, and your server will stop running or at least behave in an unexpected manner.
Receiving too many connections can be either normal (too many valid users trying to use your server) or abnormal (the result of a DoS/DDoS attack). In both cases, it is good practice to put a limit on the number of connections your server can handle, to ensure a high quality of service (connections above the limit will be politely declined).
Fortunately, Linux imposes some system-wide limits, namely tcp_max_syn_backlog and somaxconn. Even if you set the backlog parameter to a high value, Linux will cap it to align with the values of these two kernel parameters. You can read more about this feature in this post.
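As a hedged illustration of what that looks like on Linux (the values below are just examples, and the defaults differ between kernel versions), you can inspect and raise both limits with sysctl:

    # cap on the accept (fully established) queue of a listening socket
    sysctl net.core.somaxconn
    # cap on the SYN (half-open) queue
    sysctl net.ipv4.tcp_max_syn_backlog

    # raise them for the running system (root required; not persistent across reboots)
    sudo sysctl -w net.core.somaxconn=1024
    sudo sysctl -w net.ipv4.tcp_max_syn_backlog=2048

Whatever backlog value the application passes to listen() is silently capped at net.core.somaxconn. When the queue is full, Linux by default simply drops further connection attempts rather than returning an error to the listening process, so clients see retransmissions or timeouts instead of an explicit rejection.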
Where do I set the maximum number of connections for Node.js (using Express's get) on Windows 10? Is it related to the max open files (descriptors) setting on Linux? Is there a Windows version of that setting? Preferably a setting within Node.js itself, so it stays compatible when migrated to Unix.
I suspect the loadtest module gives an error because of this setting when it attacks my server with over 2000 concurrent connections; my server program uses Express and keeps connections in a queue to be processed later. loadtest finishes normally for 200 concurrent connections (-c 200 on the command line). Also, when I don't keep connections in a queue and the operation in get is simple (response.end('hello world')), it doesn't give an error for -c 2000. Maybe each request finishes before the next one starts, so it isn't really 2000 concurrent connections, and only the queued version reaches 2000 concurrency?
I'm not using the http module directly; I handle the XMLHttpRequests sent from clients in the server-side Express module's get handler.
Maybe the desktop version of Windows wasn't developed with server apps in mind, so it doesn't expose a setting for max connections, and max file handles and related variables are hard-coded?
To answer one part of your question: for the maximum number of open files there is _setmaxstdio, see:
https://msdn.microsoft.com/en-us/library/6e3b887c(vs.71).aspx
Maybe you can write a wrapper that changes this and starts your Node program.
As for the general question of the maximum number of open connections, this is what I found for some older Windows versions:
http://smallvoid.com/article/winnt-tcpip-max-limit.html
It talks about changing the config of:
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters]
TcpNumConnections = 0x00fffffe (Default = 16,777,214)
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters]
MaxUserPort = 5000 (Default = 5000, Max = 65534)
See also other related parameters in the above link. I don't know if the names of the config values changed in more recent versions of Windows, but HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters is something that I would search for if increasing the open files limit is not enough.
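If you want to experiment with those values, a hedged sketch of checking and changing one of them from an elevated command prompt might look like this (the example value is arbitrary, and changes to TCP parameters typically require a reboot to take effect):

    rem check the current value (often absent, meaning the default applies)
    reg query "HKLM\System\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort

    rem set it explicitly (example value; run as Administrator, then reboot)
    reg add "HKLM\System\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65534 /f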
For comparison, here is how I increase the number of concurrent Node/Express connections on Linux.
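A sketch of the typical approach, assuming the relevant limits are the per-process open-file limit and the kernel's accept-queue cap (the numbers are arbitrary examples, and server.js stands in for your app's entry point):

    # raise the open-file-descriptor limit for the current shell session
    ulimit -n 65535
    # raise the kernel cap on the listen() backlog
    sudo sysctl -w net.core.somaxconn=4096
    # then start the Node/Express app from the same shell
    node server.js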
I am trying to build a TCP server using Spring Integration that keeps connections open; these may run into the thousands at any point in time. My key concerns are:
(1) The maximum number of concurrent client connections that can be managed, since sessions will stay live for long periods of time.
(2) What is advised in case connections exceed the limit from (1)? Something along the lines of a cluster of servers would be helpful.
There's no mechanism to limit the number of connections allowed. You can, however, limit the workload by using fixed thread pools. You could also use an ApplicationListener to get TcpConnectionOpenEvents and immediately close the socket if your limit is exceeded (perhaps sending some error to the client first).
Of course you can have a cluster, together with some kind of load balancer.
I am running the redis-benchmark tool to send N requests from server A to server B.
The tool generates TCP requests and receives responses.
Somehow, when the number of requests reaches 51000, it stops and does not go above that.
I have tried the same thing on a different machine and got almost 100000 requests processed per second.
What sort of factors could be limiting the number of requests?
A major factor would be the number of open file descriptors the process is allowed to create. This would be true for both the server and client side.
http://redis.io/topics/clients and http://redis.io/topics/benchmarks both have the information you should work through to determine where exactly your problem is. Without the details of your setup it is unlikely we can be more specific.
Check your ulimits and your server configuration to ensure both systems are configured for the limits you intend to benchmark to, and you'll be able to get more usable data.
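As a hedged sketch of what that check might look like (the host name and numbers are placeholders; maxclients defaults to 10000 in recent Redis versions):

    # client side: file-descriptor limit of the shell running redis-benchmark
    ulimit -n
    # server side: Redis' own connection cap
    redis-cli -h server-b config get maxclients
    # raise the client's descriptor limit, then re-run the benchmark
    ulimit -n 65535
    redis-benchmark -h server-b -n 100000 -c 200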