Maximum concurrent connections in browsers

When we talk about the maximum concurrent connections allowed in browsers, is that per browser or per tab? E.g. in IE 6, the limit is 2. If I open two IE 6 windows, each showing a page that holds a persistent iframe connection, does that mean that if I open a third IE 6 window, its iframe's persistent connection can't be established?
However, I tried it, and the third window can still connect to the same server. So does this mean that in IE 6 the concurrent connection limit is 2 per window?

IE 6 isn't Firefox. Every window is its own process. So yes, I guess you get the configured number of concurrent connections (it does not need to be 2, as you can trivially change that in the registry) per window, i.e. per process.

There are concurrent connection limits per host, and they depend both on the HTTP protocol version in use (1.1 or 1.0) and on the client bandwidth available (broadband or dial-up); see the Concurrent Connections section of the AJAX - Connectivity Enhancements article over at MSDN for details.
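For reference, the registry tweak mentioned earlier uses the documented Internet Settings values; a hedged sketch (the value 8 is an example, not a recommendation):

    Windows Registry Editor Version 5.00

    ; Per-host connection limits for Internet Explorer (current user).
    ; MaxConnectionsPerServer applies to HTTP/1.1 servers (default 2);
    ; MaxConnectionsPer1_0Server applies to HTTP/1.0 servers (default 4).
    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
    "MaxConnectionsPerServer"=dword:00000008
    "MaxConnectionsPer1_0Server"=dword:00000008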

Related

Is IIS running out of connections?

Our IIS website has its Maximum Concurrent Connections set to 4294967295. Our Web API application is logging all the requests it serves to Application Insights and the two do not appear to match up. A call which appears to get served quickly in Insights does not appear to complete quickly in IIS's logs.
What could cause this and is this an indication that IIS is running out of connections, even if the maximum is set ridiculously high?
Phrasing this another way (after reading #zakima's comment): What should I be looking for to identify requests which are getting delayed in IIS before or after they hit the application itself?
Maximum Concurrent Connections defaults to 4294967295, which is a staggering number. But that does not mean the site can actually execute 4294967295 concurrent connections.
Even if 4294967295 concurrent connections arrived at the same time, IIS would not immediately start 4294967295 threads to process them, because that is unrealistic. For processing connections, IIS has a "Maximum concurrent worker threads" limit, which by several accounts depends on the operating system. If IIS can initially start only 10 worker threads, the other 4294967285 connections must queue.
In other words, 4294967295 is the default maximum number of concurrent connections that the http.sys module will accept for the site. Those requests then pass through each IIS module and finally reach the application.
If you want to check the real maximum concurrent connections of IIS, please refer to this article on using Performance Monitor.
As for how to see whether a request is delayed in IIS before or after it reaches the application, I suggest Failed Request Tracing. Here is a sample Failed Request Tracing log from my ASP.NET application.
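As a small, hedged sketch of that Performance Monitor check, the same counters can also be read from PowerShell (counter names assume an English-language Windows install):

    # Connections currently held open against the site
    Get-Counter '\Web Service(_Total)\Current Connections'

    # Requests queued by http.sys before a worker process picks them up
    Get-Counter '\HTTP Service Request Queues(*)\CurrentQueueSize'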

Max limit of outgoing calls to other services from NodeJS

I'm still relatively new to NodeJS, but I'm interested in making >100k simultaneous connections to the APIs of several stock exchanges.
The Node docs mention that the number of simultaneous outgoing calls/connections is Infinity - is that accurate?
That changelog entry refers to the fact that the maximum number of concurrent sockets Node would open used to default to 5 (it was changeable, of course), but is now set to Infinity by default.
This has nothing to do with the number of connections an application can actually handle; that will be limited by the network's characteristics, the OS configuration, the resources available on the server, etc.
In short, you are looking at something (the changelog) that has nothing to do with your question.
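For illustration, here is a minimal sketch of where that default lives and how you can cap it yourself (the host and the limit of 100 are arbitrary examples):

    // Outgoing sockets in Node are governed per Agent, not globally.
    const http = require('http');

    console.log(http.globalAgent.maxSockets); // Infinity on modern Node

    // To bound concurrency yourself, pass a custom Agent:
    const agent = new http.Agent({ keepAlive: true, maxSockets: 100 });
    http.get({ host: 'example.com', path: '/', agent }, (res) => {
      res.resume(); // drain the response so the socket can be reused
    });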

WAS 8.5 - web container pool size settings for 500 concurrent users

I am working on a project developed in plain JSF with a web service backend; our target runtime is WAS 8.5. It's an internet application used by more than 500 concurrent users.
We are performing NFT testing at 150% load, i.e. 750+ concurrent users. From Introscope, I can see the web container thread pool is set with both minimum and maximum values of 50. We suspect the web container thread pool size is throttling the concurrent load we are trying to achieve. We would welcome expert suggestions for the ThreadPoolModule and web container pool size settings on our WAS servers. (FYI: we have two WAS servers in our cluster.)
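For reference, this is roughly where that knob lives in a traditional WAS profile's server.xml - a hedged sketch only, with the attribute values taken from the description above; in practice it is usually changed through the admin console (Servers > Application servers > your server > Thread pools > WebContainer) rather than by hand:

    <threadPools name="WebContainer" minimumSize="50" maximumSize="50"/>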

Weird Tomcat outage, possibly related to maxConnections

In my company we experienced a serious problem today: our production server went down. Most people accessing our software via a browser were unable to get a connection; however, people who had already been using the software were able to continue using it. Even our hot standby server was unable to communicate with the production server, which it does using HTTP, without even going out to the broader internet. The whole time the server was accessible via ping and ssh, and in fact was quite underloaded - it normally runs at 5% CPU load and it was even lower at this time. We do almost no disk I/O.
A few days after the problem started we have a new variation: port 443 (HTTPS) is responding but port 80 stopped responding. The server load is very low. Immediately after restarting tomcat, port 80 started responding again.
We're using tomcat7, with maxThreads="200", and using maxConnections=10000. We serve all data out of main memory, so each HTTP request completes very quickly, but we have a large number of users doing very simple interactions (this is high school subject selection). But it seems very unlikely we would have 10,000 users all with their browser open on our page at the same time.
My question has several parts:
Is it likely that the "maxConnections" parameter is the cause of our woes?
Is there any reason not to set "maxConnections" to a ridiculously high value e.g. 100,000? (i.e. what's the cost of doing so?)
Does Tomcat output a warning message anywhere once it hits the "maxConnections" limit? (We didn't notice anything.)
Is it possible there's an OS limit we're hitting? We're using CentOS 6.4 (Linux) and "ulimit -f" says "unlimited". (Do firewalls understand the concept of TCP/IP connections? Could there be a limit elsewhere?)
What happens when Tomcat hits the "maxConnections" limit? Does it try to close down some inactive connections? If not, why not? I don't like the idea that our server can be held to ransom by people leaving their browsers open on it, sending keep-alives to hold the connection open.
But the main question is, "How do we fix our server?"
More info as requested by Stefan and Sharpy:
Our clients communicate directly with this server
TCP connections were in some cases immediately refused and in other cases timed out
The problem is evident even when connecting my browser to the server from within the network, or when the hot standby server - also on the same network - is unable to send its database replication messages, which normally travel over HTTP.
IPTables - yes; ip6tables - I don't think so. In any case, there is nothing between my browser and the server when I test after noticing the problem.
More info:
It really looked like we had solved the problem when we realised we were using Tomcat 7's default BIO connector, which allocates one thread per connection, and we had maxThreads=200. In fact 'netstat -an' showed about 297 connections, which matches 200 plus a queue of 100. So we changed this to NIO and restarted Tomcat. Unfortunately the same problem occurred the following day. It's possible we misconfigured the server.xml.
The server.xml and extract from catalina.out is available here:
https://www.dropbox.com/sh/sxgd0fbzyvuldy7/AACZWoBKXNKfXjsSmkgkVgW_a?dl=0
More info:
I did a load test. I'm able to create 500 connections from my development laptop, and do an HTTP GET 3 times on each, without any problem. Unless my load test is invalid (the Java class is also in the above link).
It's hard to tell for sure without hands-on debugging, but one of the first things I would check is the file descriptor limit (that's ulimit -n). TCP connections consume file descriptors, and depending on which implementation is in use, NIO connections that poll using SelectableChannel may eat several file descriptors per open socket.
To check if this is the cause:
Find Tomcat PIDs using ps
Check the ulimit the process runs with: cat /proc/<PID>/limits | fgrep 'open files'
Check how many descriptors are actually in use: ls /proc/<PID>/fd | wc -l
If the number of used descriptors is significantly lower than the limit, something else is causing your problem. But if it is equal to or very close to the limit, it's this limit that is causing issues. In that case you should increase the limit in /etc/security/limits.conf for the account under which Tomcat runs, restart the process from a newly opened shell, check via /proc/<PID>/limits that the new limit is actually in effect, and see if Tomcat's behaviour improves.
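For example, a minimal sketch of that fix (the user name tomcat and the limit 65535 are placeholders for your own values):

    # /etc/security/limits.conf - raise the open-file limit for the Tomcat user
    tomcat  soft  nofile  65535
    tomcat  hard  nofile  65535

    # restart Tomcat from a freshly opened shell, then verify:
    cat /proc/<PID>/limits | fgrep 'open files'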
While I don't have a direct answer to solve your problem, I'd like to offer my methods to find what's wrong.
Intuitively there are 3 assumptions:
If your clients hold their connections and never release them, it is quite possible your server hits the max connection limit even when there is no actual communication.
The non-responding state can also be reached in various ways, such as bugs in the server-side code.
The hardware conditions should not be ignored.
To locate the cause of this problem, your best bet is to replay the scenario in a test environment. Perform more comprehensive tests and record more detailed logs, including but not limited to:
Unit tests, especially of logic blocks using transactions, threading and synchronization.
Stress-oriented tests. Try to simulate all the user behaviours you can come up with, and their combinations, and run them in a massive batch mode.
More targeted logging. Trace client behaviour and analyse what happened immediately before the server stopped responding.
Swap in a different server machine and see if the problem still occurs.
The short answer:
Use the NIO connector instead of the default BIO connector
Set "maxConnections" to something suitable e.g. 10,000
Encourage users to use HTTPS so that intermediate proxy servers can't turn 100 page requests into 100 TCP connections.
Check for threads hanging due to deadlock problems, e.g. with a stack dump (kill -3; a quick sketch follows this list)
(If applicable and if you're not already doing this, write your client app to use the one connection for multiple page requests).
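A minimal way to capture that stack dump, assuming a stock Tomcat install where stdout is redirected to catalina.out:

    # find the Tomcat JVM, then ask it for a thread dump (SIGQUIT)
    ps -ef | grep [t]omcat   # note the PID
    kill -3 <PID>            # the dump lands in catalina.out; the JVM keeps running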
The long answer:
We were using the BIO connector instead of NIO connector. The difference between the two is that BIO is "one thread per connection" and NIO is "one thread can service many connections". So increasing "maxConnections" was irrelevant if we didn't also increase "maxThreads", which we didn't, because we didn't understand the BIO/NIO difference.
To change it to NIO, put this in the <Connector> element in server.xml:
protocol="org.apache.coyote.http11.Http11NioProtocol"
From what I've read, there's no benefit to using BIO, so I don't know why it's the default. We only used it because it was the default; we assumed the default settings were reasonable, and we didn't want to become experts in Tomcat tuning to the extent that we now have.
HOWEVER: even after making this change we had a similar occurrence: on the same day, HTTPS became unresponsive while HTTP was still working, and then a little later the opposite occurred. Which was a bit depressing. We checked in 'catalina.out' that the NIO connector was in fact being used, and it was. So we began a long period of analysing 'netstat' output and Wireshark captures. We noticed some periods of high spikes in the number of connections - in one case up to 900 connections when the baseline was around 70. These spikes occurred when we synchronised our databases between the main production server and the "appliances" we install at each customer site (schools). The more synchronisations we did, the more outages we caused, which drove us to do even more synchronisations in a downward spiral.
What seems to be happening is that the NSW Education Department proxy server splits our database synchronisation traffic into multiple connections, so that 1000 page requests become 1000 connections, which furthermore are not closed properly until the 4-minute TCP timeout. The proxy server was only able to do this because we were using HTTP. The reason is presumably load balancing - by splitting the page requests across their 4 servers, they expected better balance. When we switched to HTTPS, they were unable to do this and were forced to use just one connection. So that particular problem is eliminated - we no longer see bursts in the number of connections.
People have suggested increasing "maxThreads". That would in fact have improved things, but it is not the 'proper' solution - we had the default of 200, yet at any given time hardly any of those threads were doing anything; in fact hardly any of them were even allocated to page requests.
I think you need to load test the application with Apache JMeter to check the number of connections, and use JConsole or Zabbix to watch heap space and take thread dumps of the Tomcat server.
The NIO connector of Apache Tomcat can accept a maximum of 10,000 connections, but I don't think it's a good idea to give that many connections to one instance of Tomcat; a better way is to run multiple instances.
In my view, the best approach for a production server is to run Apache HTTP Server in front and point your Tomcat instances at it using the AJP connector.
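A minimal sketch of that front end, assuming mod_proxy and mod_proxy_ajp are enabled and Tomcat's AJP connector listens on its default port 8009:

    # httpd.conf (or a vhost) - forward everything to Tomcat over AJP
    ProxyPass / ajp://localhost:8009/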
Hope this helps.
Are you absolutely sure you're not hitting the maxThreads limit? Have you tried changing it?
These days browsers limit simultaneous connections to a max of 4 per hostname/IP, so if you have 50 simultaneous browsers, you could easily hit that limit. Hopefully your webapp responds quickly enough to handle this, but long polling has become popular these days (until websockets are more prevalent), so you may have 200 long polls.
Another cause could be the use of HTTP[S] for app-to-app communication (that is, with no browser involved). Sometimes app writers are sloppy and create new connections to perform multiple tasks in parallel, causing TCP and HTTP overhead. Double check that you are not getting a flood of requests. Log files can usually help you here, or you can use Wireshark to count the number of HTTP requests or HTTP[S] connections. If possible, modify your API to handle multiple API calls in one HTTP request.
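As a quick, hedged way to count those connections from the shell (port 80 here is illustrative; adjust for HTTPS):

    # established HTTP connections on this host
    netstat -an | grep ':80 ' | grep ESTABLISHED | wc -l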
Related to the last point: if you have many HTTP/1.1 requests going across one connection, an intermediate proxy may be splitting them into multiple connections for load balancing purposes. Sounds crazy, I know, but I've seen it happen.
Lastly, some crawl bots ignore the crawl delay set in robots.txt. Again, log files and/or Wireshark can help you determine this.
Overall, run more experiments with more changes (maxThreads, HTTPS, etc.) before jumping to conclusions about maxConnections.

WebSphere - preventing DoS - limiting web container threads per IP (client)

WebSphere provides a WebContainer maximum thread pool size, which lets it control the number of web requests handled at the same time.
I am wondering if WebSphere allows something similar per incoming IP address. That is, does WebSphere allow you to configure, for each distinct incoming IP, a cap of, for example, 10 WebContainer threads?
Thank you.
There isn't a capability to restrict the number of threads per client (based on IP or other characteristics).
DoS is better handled by other systems upstream.
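For example, one such upstream option is a firewall rule; here is a hedged sketch using the iptables connlimit module (the port and the threshold of 10 are illustrative):

    # reject new connections from any single IP that already holds 10 connections to port 443
    iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 10 -j REJECT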
