Can I Open Multiple Connections to an HTTP Server? - multithreading

I’m writing a small software component to download resources from a web server (IIS), but the system's performance is not acceptable. I’m now planning to increase the number of connections to the web server by spawning multiple threads.
My question is: can I improve performance by using multiple threads? Moreover, does the web server allow me to open several simultaneous connections?
Thanks
Upul

All properly configured web servers should be able to handle multiple connections from the same source. This allows, for example, a browser to download two images from a page at once.
Some servers may place an upper limit on the number of concurrent connections they will accept from one client, but this is usually a high number. Using up to 6 connections is normally safe.
As for whether it will actually improve performance, that depends on your situation. If you have a very fast connection to an internet backbone and find that the speed you are getting from the remote server is not taking advantage of it, then in many situations multithreading can improve speed. If you are already maxing out your own connection to the internet, or the remote server's connection, then more threads can't help.
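As a rough illustration, here is a minimal sketch (TypeScript, assuming Node 18+ for the built-in fetch; the URL list and the cap of 6 parallel connections are example values) of downloading resources over several connections at once:

```typescript
// Sketch: download a list of resources over several parallel connections.
// Assumes Node 18+ (built-in fetch). URLs and the cap of 6 are examples.
const urls: string[] = [
  "http://example.com/a.bin",
  "http://example.com/b.bin",
  "http://example.com/c.bin",
];

async function downloadAll(list: string[], maxParallel = 6): Promise<void> {
  let next = 0;
  // Each "lane" pulls the next URL when it finishes the previous one,
  // so at most maxParallel requests are in flight at any moment.
  const lanes = Array.from({ length: maxParallel }, async () => {
    while (next < list.length) {
      const url = list[next++];
      const res = await fetch(url);
      const body = await res.arrayBuffer();
      console.log(`${url}: ${body.byteLength} bytes`);
    }
  });
  await Promise.all(lanes);
}

downloadAll(urls).catch(console.error);
```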

Web servers do allow simultaneous connections. There should not be any problem opening them, unless the application logic prevents multiple connections from the same client. To clarify the point: if a login is required before downloading resources and the application does not allow multiple simultaneous logins, you will get stuck there. There are also applications that do not allow many connections from the same source for security reasons.

Related

Can I use just a few servers to get a lot of network connections?

I am trying to get as many network connections as I can from one machine, using only a few machines. I just want to be sure that establishing many connections between a few servers will give me results similar to having many connections from many different servers.
Yes, and some of the issues related to that are known as the C10K problem!
However, a connection to localhost is not the same as a distant remote one: the latency and the bandwidth are quite different.
Maybe you want to do some web server benchmarking? There are tools for that!
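For a crude benchmark of connection counts specifically, a sketch along these lines (TypeScript using Node's net module; the host, port, and total of 10,000 are example values) opens many idle TCP connections and reports how far it gets before a limit is hit:

```typescript
import * as net from "net";

// Sketch: open many idle TCP connections to one server and count how many
// succeed before a limit (file descriptors, conntrack, ports) is hit.
// HOST, PORT, and TOTAL are example values.
const HOST = "127.0.0.1";
const PORT = 8080;
const TOTAL = 10_000;

let open = 0;
for (let i = 0; i < TOTAL; i++) {
  const sock = net.connect(PORT, HOST, () => {
    open++;
    if (open % 1_000 === 0) console.log(`${open} connections open`);
  });
  sock.on("error", (err) => console.error(`connection ${i}: ${err.message}`));
}
```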

I'm not sure how to correctly configure my server setup

This is kind of a multi-tiered question, where my end goal is to establish the best way to set up my server, which will be hosting a website as well as a service (using Socket.io) for an iOS (and eventually an Android) app. Both the app service and the website are going to be written in node.js, as I need high concurrency and scaling for the app service, and I figured that while I'm at it I may as well do the website in node too, because it wouldn't be that much different in terms of performance from something like Apache (from my understanding).
Also, the website has a lower priority than the app service; the app service should receive significantly higher traffic than the website (though in the long run this may change). Money isn't my greatest priority here, but it is a limiting factor. I feel that having a service with 99.9% uptime (as 100% uptime appears to be virtually impossible in the long run) is more important than saving money at the cost of more downtime.
Firstly, I understand that having one node process per CPU core is the best way to fully utilise a multi-core CPU. I now understand, after researching, that running more than one per core is inefficient because the CPU has to context-switch between the multiple processes. Why is it, then, that whenever I see code posted on how to use the built-in cluster module in node.js, the master process creates a number of workers equal to the number of cores? That would mean 9 processes on an 8-core machine (1 master process and 8 worker processes). Is this because the master process usually only restarts worker processes when they crash or end, and therefore does so little that it doesn't matter that it shares a CPU core with another node process? (A sketch of this pattern follows.)
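For reference, the commonly posted pattern looks roughly like this (a TypeScript sketch; the port is an example value, and cluster.isPrimary is the modern name for isMaster). Notice that the master does nothing but fork and respawn workers, which is why it barely competes for a core:

```typescript
import cluster from "cluster";
import * as http from "http";
import * as os from "os";

if (cluster.isPrimary) {
  // The master only forks and respawns; it does no request handling.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork(); // one worker per core
  }
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork(); // replace the crashed/ended worker
  });
} else {
  // Each worker runs its own server; the cluster module shares the port.
  http
    .createServer((req, res) => {
      res.end(`handled by worker ${process.pid}\n`);
    })
    .listen(3000);
}
```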
If that is the case, I am planning to have the workers provide the app service, and have the master process manage the workers but also host a webpage providing statistical information on the server's state and all other relevant information (like number of clients connected, worker restart count, error logs, etc.). Is this a bad idea? Would it be better to run this webpage on a separate worker and leave the master process to just manage the workers?
So overall I want the following elements: a service to handle the requests from the app (my main source of traffic); a website (fairly simple, a couple of pages and a registration form); an SQL database to store user information; a webpage (probably locally hosted on the server machine) which only I can access that shows information about the server (users connected, worker restarts, server logs, other useful information, etc.); and apparently nginx would be a good idea where I'm handling multiple node processes accepting connections from the app. After doing research I've also found that it would probably be best to host on a VPS initially. I was thinking that at first, when the amount of traffic the app service receives will most likely be fairly low, I could run all of those elements on one VPS. Or would it be best to have them running on separate VPSs, except for the website and the server-status webpage, which I could run on the same one? I guess this way, if there is a hardware failure and something goes down, not everything does, and I could run 2 instances of the app service on 2 different VPSs so that if one goes down the other is still functioning. Would this just be overkill? I doubt I would need multiple app service instances to support the traffic load for a while, but it would help reduce the apparent downtime for users.
Maybe this all depends on what I value more and have the time to do: a more complex server setup that costs more and is maybe a little unnecessary, but guarantees a consistent and reliable service; or a cheaper and simpler setup that may succumb to downtime due to coding errors and server hardware issues.
Also, it's worth noting I've never had any real experience with production-level servers, so in some ways I've jumped in at the deep end with this. I feel like I've come a long way in the past half a year and that I'm getting a fairly good grasp of what I need to do; I could just do with some advice from someone with experience who has an idea of what roadblocks I may come across along the way, and whether I'm causing myself unnecessary problems with this kind of setup.
Any advice is greatly appreciated, thanks for taking the time to read my question.

How do you allow a large number of connections from a client in IIS?

I am trying to do some performance testing on an ASP.NET application where I suspect most of the stress is currently on the database. I want to test entire round trips through the system to find the bottlenecks, so I am writing a test application that hits my web service using multiple threads to simulate multiple clients. However, IIS (7 on Windows 7) seems to have a default limit of 5 concurrent connections per client.
I know that this limit is there for good reasons (like preventing DoS attacks), but is there a way to bump it up for my testing environment?
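For the client side of such a test, something like this sketch (TypeScript, assuming Node 18+ for fetch; the URL and the counts are example values) fires waves of concurrent requests, where a per-client connection cap will show up as a latency plateau:

```typescript
// Sketch of a load-test client: fire CONCURRENCY simultaneous requests per
// wave and report the slowest response. TARGET and counts are example values.
const TARGET = "http://localhost/myservice";
const CONCURRENCY = 20;
const WAVES = 10;

async function wave(): Promise<number> {
  const started = Date.now();
  const times = await Promise.all(
    Array.from({ length: CONCURRENCY }, async () => {
      await fetch(TARGET);
      return Date.now() - started;
    })
  );
  return Math.max(...times);
}

(async () => {
  for (let i = 0; i < WAVES; i++) {
    console.log(`wave ${i}: slowest response ${await wave()} ms`);
  }
})();
```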

Concurrent networking in Scala

I have a working prototype of a concurrent Scala program using Actors. I am now trying to fine-tune the number of different Actors, etc.
One stage of the processing requires fetching new data over the internet. Of course, there is nothing I can really do to speed that aspect up. However, I figure that if I launch a bunch of requests in parallel, I can bring down the total time. The question, therefore, is:
=> Is there a limit on concurrent networking in Scala or on Unix systems (such as a maximum number of sockets)? If so, how can I find out what it is?
In Linux, there is a limit on the number of open file descriptors each process can have; this can be seen using ulimit -n. There is also a system-wide limit in /proc/sys/fs/file-max.
Another limit is the number of connections that the Linux firewall can track. If you are using the iptables connection-tracking firewall, this value is in /proc/sys/net/netfilter/nf_conntrack_max.
Another limit is, of course, TCP/IP itself. Each connection needs a unique combination of (localIP, localPort, remoteIP, remotePort), and since the local port is a 16-bit number, a single local IP can sustain at most about 65,000 simultaneous connections to the same remote host and port.
Regarding speeding things up via concurrent connections: it isn't as simple as just opening more connections.
It depends on where the bottleneck is. If your local connection is already fully used, adding more connections will only slow things down. Likewise, if you are connecting to a single remote server and its connection is fully used, more connections will only slow it down.
Where you can get a benefit is when your local connection is not fully used and you are connecting to multiple remote hosts.
If you look at web browsers, you will see that they limit how many connections they make to the same remote server, and also how many they make in total; a per-host limiter in that style is sketched below.
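To imitate that browser behaviour when fetching in parallel, a per-host cap can be enforced with a small hand-off semaphore. This is only a sketch (TypeScript, assuming Node 18+ for fetch; the limit of 6 per host is an example value):

```typescript
// Sketch: cap in-flight requests per remote host, the way browsers do.
// PER_HOST_LIMIT is an example value; assumes Node 18+ for fetch.
const PER_HOST_LIMIT = 6;
const inFlight = new Map<string, number>();
const waiters = new Map<string, Array<() => void>>();

async function acquire(host: string): Promise<void> {
  const current = inFlight.get(host) ?? 0;
  if (current < PER_HOST_LIMIT) {
    inFlight.set(host, current + 1);
    return;
  }
  // At the limit: park until a finishing request hands us its slot.
  await new Promise<void>((resolve) => {
    const queue = waiters.get(host) ?? [];
    queue.push(resolve);
    waiters.set(host, queue);
  });
}

function release(host: string): void {
  const next = waiters.get(host)?.shift();
  if (next) {
    next(); // hand the slot directly to the next waiter; count is unchanged
  } else {
    inFlight.set(host, (inFlight.get(host) ?? 1) - 1);
  }
}

// Usage: await limitedFetch("http://example.com/a") from anywhere; requests
// to the same host beyond the cap queue up automatically.
async function limitedFetch(url: string): Promise<Response> {
  const host = new URL(url).host;
  await acquire(host);
  try {
    return await fetch(url);
  } finally {
    release(host);
  }
}
```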

Socket.io: How many concurrent connections can WebSockets handle?

I am wondering if you have any data on concurrent connections to WebSockets. I am using Socket.io on a Node.js server. How many clients can connect to the socket and receive data without bringing my server down? 1,000? 1,000,000?
Thanks!
This depends heavily on your hardware configuration, on what exactly you are doing/processing on the server side, and on whether your system is optimized for many concurrent connections. For example, on a Linux machine with default settings you would probably first hit the maximum number of open files, or other limits (which can be increased), before running into hardware resource exhaustion or similar scalability issues. The key resource may be the amount of RAM your node.js program can allocate to keep concurrent connections open and to accept new ones.
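As a starting point for measuring this on your own machine, a minimal Socket.io server that only tracks its live connection count might look like this (a TypeScript sketch; port 3000 is an example value):

```typescript
import { Server } from "socket.io";

// Minimal Socket.io server that only tracks its live connection count,
// useful for watching where a given machine tops out. Port is an example.
const io = new Server(3000);
let connections = 0;

io.on("connection", (socket) => {
  connections++;
  if (connections % 1_000 === 0) {
    console.log(`${connections} concurrent connections`);
  }
  socket.on("disconnect", () => {
    connections--;
  });
});
```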
http://blog.caustik.com/2012/08/19/node-js-w1m-concurrent-connections/
Check out this blog. We use the same principle. Previously our node.js server would crash after 100 concurrent connections due to hardware constraints, but after we moved to Amazon EC2 it is now highly scalable.
