I am using a timeout mechanism to close my connection sockets: any connection that has been inactive for more than 15 s is closed. But my Linux system seems to allow only 1024 open sockets at a time.
So I want to ask:
(1) Is there a limit on the number of connection sockets?
(2) If there is an upper limit, does that mean the timeout mechanism cannot cope with a large number of concurrent requests arriving within the same window?
(3) I am using a dual-core, 4 GB virtual machine. Will upgrading the machine increase the number of sockets I can create?
(4) If (2) is correct, what method (or what timeout) should the server use to close sockets?
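For reference, a minimal sketch of the idle-timeout close described above, assuming a plain java.net blocking server; the 15 s figure comes from the question, while the port, buffer size, and thread-per-client model are only illustrative:

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Minimal sketch: close any connection that stays idle longer than 15 s.
public class IdleTimeoutServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start(); // thread-per-client, illustration only
            }
        }
    }

    private static void handle(Socket client) {
        try (client; InputStream in = client.getInputStream()) {
            client.setSoTimeout(15_000); // reads give up after 15 s of inactivity
            byte[] buf = new byte[1024];
            while (in.read(buf) != -1) {
                // ... process incoming data ...
            }
        } catch (SocketTimeoutException idle) {
            // idle > 15 s: fall through and let try-with-resources close the socket
        } catch (IOException ignored) {
        }
    }
}

Note that each open socket consumes a file descriptor, and 1024 is the usual default per-process open-file limit (ulimit -n) on Linux, so that is most likely the ceiling being hit; it can be raised independently of CPU and RAM.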
I've been reading this article on Bluetooth 5 & BLE maximum throughput. It provides data on maximum throughput across different devices and configurations. As far as I understand, these measurements describe the connection between two devices and their respective data rates.
When establishing connections to more than one device, do these data rates apply to each connection independently? Or is the data rate shared between all of the connections?
For example: if I have a device with a maximum throughput of 1000 kbps and connect it to two peripherals, will both connections have a throughput of 1000 kbps? Or will it be split between the two connections, 500 kbps each?
All Bluetooth chips I'm aware of have only one radio and one antenna. That means the connections are timeslotted, so if your connections use the 1 Mbit/s PHY, the total throughput won't exceed 1 Mbit/s.
How much each connection gets depends heavily on how the scheduler is implemented. If two connections have the same connection interval, a scheduler will often allocate a newly established connection's events right after the first connection's connection events. This can leave the first connection able to send only one or two packets per connection event while the second connection gets all the remaining radio time.
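To make the timeslotting point concrete, here is a rough back-of-envelope sketch; the even split assumes a perfectly fair scheduler, which, as noted above, real schedulers often are not:

// Illustration only: per-connection throughput on a single timeslotted radio,
// assuming (unrealistically) a perfectly fair scheduler.
public class BleThroughputSplit {
    public static void main(String[] args) {
        double totalKbps = 1000; // usable radio throughput, from the question's example
        for (int connections = 1; connections <= 4; connections++) {
            System.out.printf("%d connection(s): ~%.0f kbps each, %.0f kbps total%n",
                    connections, totalKbps / connections, totalKbps);
        }
    }
}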
I am using mosquitto (v1.5.8) as my broker. I want to connect to the broker from a browser, so I'm using MQTT over WebSockets. What configuration do I need in the mosquitto.conf file to get the maximum number of connections, or unlimited connections?
In mosquitto.conf there is a parameter named max_connections; its default value is -1, which means unlimited connections. In practice, though, unlimited is impossible: the maximum number of concurrent connections an MQTT broker can hold depends on the underlying operating system.
For example, on Linux/Ubuntu the number of concurrent connections is bounded by the number of open file descriptors a process may hold. You can check this with ulimit -a and look at the "open files" entry; the default is usually 1024.
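As a sketch, the relevant pieces of mosquitto.conf for a browser-facing WebSockets listener would look something like this (the port number is just an example):

# mosquitto.conf (sketch)
max_connections -1     # -1 = no broker-imposed limit; the OS fd limit still applies

listener 9001          # example port for browser clients
protocol websockets

The effective ceiling is still the process's open-file limit, which can be raised with ulimit -n or persistently in /etc/security/limits.conf.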
The MongoDB docs suggest reducing the TCP keepalive time for better performance:
If you experience socket errors between clients and servers or between members of a sharded cluster or replica set that do not have other reasonable causes, check the TCP keepalive value (for example, the tcp_keepalive_time value on Linux systems). A common keepalive period is 7200 seconds (2 hours); however, different distributions and macOS may have different settings.
However, the docs do not explain why this helps or how it improves performance. From my (limited) understanding, connections created by Mongo shards and replicas have their own keepalive time, which might be much shorter than the Linux global keepalive value, so Mongo would break the connection as per its own config, and creating a new connection should ideally not take much time.
How does reducing the Linux TCP keepalive setting improve performance?
I think a shorter keepalive setting on the DB side or the client side keeps fewer total connections open, but with a higher percentage of active connections, whether server-client or server-server; fewer pooled connections also use fewer resources on both the server and the client.
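A commonly cited reason, beyond resource use: firewalls, NAT devices, and load balancers silently drop idle connections long before the default 2-hour keepalive fires, so a client can block on a socket that is already dead; a shorter keepalive detects this much sooner. On Linux you can inspect and lower the value like this (120 s is a value often suggested for MongoDB deployments; note that sysctl -w does not persist across reboots):

# show the current idle time (seconds) before keepalive probes start
sysctl net.ipv4.tcp_keepalive_time

# lower it so dead peers are noticed in minutes rather than hours
sudo sysctl -w net.ipv4.tcp_keepalive_time=120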
I read that Redis is single-threaded.
Using the Jedis client (Java), we can configure pool connections like this:
spring.redis.jedis.pool.max-active=8 # Maximum number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
spring.redis.jedis.pool.max-idle=8 # Maximum number of "idle" connections in the pool. Use a negative value to indicate an unlimited number of idle connections.
spring.redis.jedis.pool.max-wait=-1ms # Maximum amount of time a connection allocation should block before throwing an exception when the pool is exhausted. Use a negative value to block indefinitely.
spring.redis.jedis.pool.min-idle=0 # Target for the minimum number of idle connections to maintain in the pool. This setting only has an effect if it is positive.
I know that a connection pool is important so that a request can get an already-open connection rather than spending time connecting again.
Imagine that 8 client requests use all available pooled connections, so 8 connections are in use, and each client issues a GET command to Redis.
Will Redis process them one at a time? Does each request have to wait for the others to finish, since Redis is single-threaded? In that case, while 1 GET is being processed, are the other 7 queued inside Redis?
How does max-active impact Redis performance if Redis is single-threaded?
Only the main loop of Redis is single-threaded; it uses separate threads for client buffers to speed up processing. Most of the time you spend getting data from Redis is networking time, and multiple connections help with that. This means that unless you are storing really big objects, you will get an almost linear throughput increase.
Here I measured different setups for Lettuce (another client); I expect the 'pooled' setup to be very similar to what you will get with Jedis: Redis is single thread. Then why should I use lettuce?
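As a concrete sketch of why max-active matters even against a single-threaded server: with N pooled connections, N requests can be in flight at once, overlapping their network round-trips while Redis serves each command in turn. A minimal Jedis example (host, port, and key names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PooledGets {
    public static void main(String[] args) throws InterruptedException {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(8); // corresponds to spring.redis.jedis.pool.max-active
        config.setMaxIdle(8);  // corresponds to spring.redis.jedis.pool.max-idle

        try (JedisPool pool = new JedisPool(config, "localhost", 6379)) {
            ExecutorService workers = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 8; i++) {
                final int n = i;
                workers.submit(() -> {
                    // Each task borrows its own connection; the 8 GETs overlap on
                    // the network even though Redis executes them one at a time.
                    try (Jedis jedis = pool.getResource()) {
                        System.out.println(jedis.get("key:" + n));
                    }
                });
            }
            workers.shutdown();
            workers.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}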
I am trying to build a TCP server using Spring Integration that keeps connections alive for long periods; live connections may run into the thousands at any point in time. My key concerns are:
(1) The maximum number of concurrent client connections that can be managed, given that sessions will be live for a long time.
(2) What is advised in case connections exceed the limit in (1)?
(3) Something along the lines of a cluster of servers would be helpful.
There's no mechanism to limit the number of connections allowed. You can, however, limit the workload by using fixed thread pools. You could also use an ApplicationListener to get TcpConnectionOpenEvents and immediately close the socket if your limit is exceeded (perhaps sending some error to the client first).
Of course you can have a cluster, together with some kind of load balancer.
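A rough sketch of the listener approach; MAX_CONNECTIONS and the counting logic are illustrative, not a built-in Spring Integration feature (the event source is assumed to be the TcpConnection itself):

import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.context.ApplicationListener;
import org.springframework.integration.ip.tcp.connection.TcpConnection;
import org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent;
import org.springframework.integration.ip.tcp.connection.TcpConnectionEvent;
import org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent;
import org.springframework.stereotype.Component;

// Sketch: refuse connections above a fixed cap by closing them as they open.
@Component
public class ConnectionLimiter implements ApplicationListener<TcpConnectionEvent> {

    private static final int MAX_CONNECTIONS = 5000; // illustrative cap
    private final AtomicInteger open = new AtomicInteger();

    @Override
    public void onApplicationEvent(TcpConnectionEvent event) {
        if (event instanceof TcpConnectionOpenEvent) {
            if (open.incrementAndGet() > MAX_CONNECTIONS) {
                // Over the cap: reject by closing; the resulting close event
                // below brings the counter back down.
                ((TcpConnection) event.getSource()).close();
            }
        } else if (event instanceof TcpConnectionCloseEvent) {
            open.decrementAndGet();
        }
    }
}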