HTTP download and multiple threads

This may be a duplicate but I have not seen this being fully answered.
Does HTTP download throughput increase when using threads?
My thinking is that while the TCP stack on the server is waiting for an ACK from the receiver before sending the next chunk of data, another thread can send out a request for data which is then serviced, leading to an increase in throughput.
Is this correct?

Yes, that is pretty much correct. Threading HTTP requests increases throughput up until the server's maximum number of connections is reached, at which point the gain plateaus. The performance increase is of course also limited by the threading capabilities of both the server and the client machines.

It's correct only at startup; during the transfer, TCP has a dynamic window of data that can be sent without receiving an ACK.
So while the data transfer is going on, in most situations every chunk of data that can be sent is sent, resulting in maximum throughput.
When you use multiple threads you reduce the dead time in the TCP handshaking.
It can also be useful if you have to download files from different servers, or if the server limits the bandwidth per connection.
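As a concrete illustration of the multi-connection approach, here is a minimal sketch using libcurl and pthreads, assuming the server honours HTTP range requests; the URL and the two fixed byte ranges are made up, and a real downloader would write each range to its own buffer or file offset instead of stdout:

    /* Minimal sketch: download two byte ranges of the same file over two
     * parallel TCP connections. URL and ranges are hypothetical. */
    #include <curl/curl.h>
    #include <pthread.h>

    static void *fetch_range(void *arg)
    {
        const char *range = arg;                       /* e.g. "0-1048575" */
        CURL *curl = curl_easy_init();
        if (!curl)
            return NULL;
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/bigfile.bin");
        curl_easy_setopt(curl, CURLOPT_RANGE, range);  /* one connection per range */
        curl_easy_perform(curl);                       /* data goes to stdout by default */
        curl_easy_cleanup(curl);
        return NULL;
    }

    int main(void)
    {
        const char *ranges[] = { "0-1048575", "1048576-2097151" };
        pthread_t tid[2];

        curl_global_init(CURL_GLOBAL_DEFAULT);
        for (int i = 0; i < 2; i++)
            pthread_create(&tid[i], NULL, fetch_range, (void *)ranges[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);
        curl_global_cleanup();
        return 0;
    }

Whether this actually helps depends on whether a single connection already fills the pipe, as the answers above explain.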

Can clients with slow connection break down blocking-socket-based server?

By definition of blocking sockets, all calls to send() or recv() block until the whole networking operation is finished. This can take some time, especially when using TCP and talking to a client with a slow connection. This is of course solved by introducing threads and thread pools. But what happens if all threads are blocked by some slow client?

For example, suppose your server wants to serve 10,000+ clients with 100 threads, sending data to all users every second. That means each thread has to call send() 100 times every second. What happens if at some point 100 clients are connected with connections so slow that one call to send()/recv() takes 5 seconds to complete for them (or possibly an attacker does this on purpose)? In that case all 100 threads are blocked and everyone else waits.

How is this generally solved? Adding more threads to the thread pool is probably not a solution, since there can always be more slow clients, and going for a really high number of threads would introduce even more problems with context switching, resource consumption, etc.
Can clients with slow connection break down blocking-socket-based server?
Yes, they can. And it does consume resources on the server side. And if too much of this happens, you can end up with a form of "denial of service".
Note that this is worse if you use blocking I/O on the server side, because you are tying down a thread while the response is being sent. But it is still a problem with non-blocking I/O. In the latter case, you consume server-side sockets, port numbers, and memory to buffer the responses waiting to be sent.
If you want to guard your server against the effects of slow clients, it needs to implement a timeout on sending responses. If the server finds that it is taking too long to write a response ... for whatever reason ... it should simply close the socket.
Typical web servers do this by default.
Finally, as David notes, using non-blocking I/O will make your server more scalable: you can handle more simultaneous requests with fewer resources. But there are still limits to how much a single server can scale. Beyond a certain point you need a way to spread the request load over multiple servers.
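One way to implement the send timeout described above, if you keep the blocking model, is a per-socket send timeout. A minimal sketch, assuming a POSIX/Linux server; the 5-second limit is chosen purely for illustration and error handling is trimmed:

    /* Minimal sketch: put a send timeout on a blocking socket so a slow or
     * malicious client cannot pin a worker thread indefinitely. */
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    static int send_with_timeout(int fd, const void *buf, size_t len)
    {
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

        /* After this, a blocked send() fails with EAGAIN/EWOULDBLOCK once the
         * peer has not drained any data for 5 seconds. */
        if (setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)) != 0)
            return -1;

        if (send(fd, buf, len, 0) < 0) {
            close(fd);   /* give up on the slow client and free the thread */
            return -1;
        }
        return 0;
    }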

How many long-lived concurrent TCP sessions are considered reasonable?

I am developing a TCP server in Linux which initially can handle thousands of concurrent clients, which are intended to be long-lived. However, after starting to implement some functionality, I made a thread pool for the calls that block and should be handled separately, like database or disk access.
After some tests, under high load requesting "many" asynchronous functions, my server starts to lag due to many tasks being enqueued, as they arrive faster than they can be processed. These tasks are completed in nanoseconds, but there are thousands of them. I do understand this is totally normal.
I could of course scale out behind a load balancer or buy better servers with more cores. However, in practice and as a standard in the industry, how many concurrent long-lived TCP sessions are considered a "good" number for a server like the one I'm describing? How can I tell that the number of concurrent connections I handle is "good enough"?
Unfortunately there is no magic number to answer your question, but I have some considerations to help you find your own number:
First of all, every operating system has its own limit on simultaneous connections, since resources such as file descriptors and port numbers are finite. So check that you are not exceeding this limit, or else new connections will be refused by your server.
In order to identify how many simultaneous connections are okay for you, you must establish a maximum response time for your service.
Keep in mind that even with simultaneous connections, multicore CPUs, etc., the responses all go out over the same network and can get bottlenecked there. Thus I advise you to run a load test and a stress test against your architecture in order to find your acceptable latency limit.
TL;DR: There is no magic answer; you should run a load test and a stress test to find it.
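Regarding operating-system limits, the one a long-lived-connection server usually hits first is the per-process file-descriptor limit, since every TCP session holds a descriptor. A minimal sketch of checking and raising it from C, the programmatic counterpart of ulimit -n; the actual values are system-dependent:

    /* Minimal sketch: inspect and raise RLIMIT_NOFILE, the per-process cap on
     * open file descriptors and therefore on simultaneous TCP sessions. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("fd limit: soft=%llu hard=%llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);

        /* Raising the soft limit up to the hard limit needs no privileges;
         * going beyond the hard limit requires root or a system-level change. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");
        return 0;
    }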

Can I use SO_REUSEPORT to distribute a single UDP flow to multiple receiver threads?

My Linux application needs to receive a single UDP flow with modestly-sized packets (~1 KB) at a rate on the order of ~600,000 packets per second. My current implementation is naive: it has a single thread that simply calls recv() repeatedly, placing the received data in a queue to be processed by another thread. Therefore, the receiver thread is only in charge of pulling in the packets.
In some initial testing that I've done, I'm only able to receive between 200,000-300,000 packets per second before the thread reaches full utilization of its CPU core. This obviously isn't good enough to meet the goal of ~600,000 packets per second.
Ideally, I would find some way of distributing the packet-reception load across multiple threads. In looking for a solution to the problem, I came across the SO_REUSEPORT socket option, which allows multiple TCP/UDP sockets to be bound to the same IP/port combination. At first, this seemed to be exactly what I wanted.
However, the article describing SO_REUSEPORT also points out this detail:
Incoming connections and datagrams are distributed to the server sockets using a hash based on the 4-tuple of the connection—that is, the peer IP address and port plus the local IP address and port. This means, for example, that if a client uses the same socket to send a series of datagrams to the server port, then those datagrams will all be directed to the same receiving server (as long as it continues to exist). This eases the task of conducting stateful conversations between the client and server.
Therefore, if I only have a single UDP flow, the above hashing implementation would yield all of the packets being directed to the same receiver thread, thwarting my attempt at parallelizing the work. Therefore, the question is: is there a way to receive a single flow of UDP packets from multiple threads, using SO_REUSEPORT or some other mechanism?
Note that my application can handle reordering of packets; the protocol that the datagrams are formatted with contains sequencing information that I can use to reorder them properly afterward.
If you haven't found the solution in the last 3 years, take a look at SO_ATTACH_REUSEPORT_CBPF. We had exactly the same issue and we solved it by attaching a simple BPF program which distributes datagrams randomly mod n.
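A minimal sketch of that approach, assuming a kernel recent enough to support SO_ATTACH_REUSEPORT_CBPF and a group of four sockets, each serviced by its own receive thread; the socket count, port, and choice of a purely random distribution are illustrative:

    /* Minimal sketch: build one of N UDP sockets sharing a port via
     * SO_REUSEPORT, and attach a classic BPF program that picks a random
     * socket index for each incoming datagram, so a single 4-tuple flow is
     * spread across all N sockets. */
    #include <linux/filter.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    #define N_SOCKETS 4

    static int make_group_socket(unsigned short port, int attach_filter)
    {
        int one = 1;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;

        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        if (attach_filter) {
            struct sock_filter code[] = {
                /* A = 32-bit kernel random number */
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS, SKF_AD_OFF + SKF_AD_RANDOM),
                /* A = A % N_SOCKETS */
                BPF_STMT(BPF_ALU | BPF_MOD | BPF_K, N_SOCKETS),
                /* return A: the index of the socket that gets the datagram */
                BPF_STMT(BPF_RET | BPF_A, 0),
            };
            struct sock_fprog prog = {
                .len = sizeof(code) / sizeof(code[0]),
                .filter = code,
            };
            /* Attaching to one socket applies to the whole reuseport group. */
            setsockopt(fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
                       &prog, sizeof(prog));
        }
        return fd;
    }

Since this distribution ignores the 4-tuple, packets from the single flow arrive on different sockets in arbitrary order; the sequencing information mentioned in the question is what restores order afterward.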

How to temporarily buffer incoming network traffic for latency-sensitive HFT application?

We are running a Java-based trading application, and there are certain periods where we want to prioritize outgoing network traffic as much as possible for about 10 ms. Is there a way to temporarily buffer all incoming network traffic during a short time period, either on the network card or via a process or buffer on our Redhat Linux box?
The rationale behind this is that the incoming network traffic spikes during this same period, and the application processing this traffic is stealing CPU cycles from the process we are trying to prioritize. We do not have fine-grained control over the application treating the incoming network traffic.
We're on a 1 Gbps connection, so a buffer of about 1 MB should be sufficient. We would prefer not to drop the incoming traffic and request retransmission, as this would increase load on our network during these already busy periods.
This is possible using QoS on the router, or using trickle to control your bandwidth with a sample configuration in /etc/trickled.conf; see the example in the linked URL.
I am not sure whether I understand your problem correctly. Your concern is that sometimes you need to prioritize outgoing network traffic, and during that time the incoming traffic builds up and might eventually cause packet drops or retransmissions, which you don't want. Therefore, you want to buffer your incoming traffic.
If my understanding is correct and you are using TCP, try making your TCP receive buffers bigger.
See http://kaivanov.blogspot.com/2010/09/linux-tcp-tuning.html, and then use netstat to check whether your change is effective.
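If you do have access to the code that owns the receiving socket (the question suggests you may not), the per-socket version of the same idea looks roughly like the sketch below; the 1 MB figure simply mirrors the question's own estimate:

    /* Minimal sketch: enlarge the kernel receive buffer for one socket so
     * that incoming data can queue there while the CPU is busy elsewhere. */
    #include <sys/socket.h>

    static int enlarge_receive_buffer(int fd)
    {
        int bytes = 1024 * 1024;   /* ~1 MB of kernel-side buffering */

        /* The kernel caps this at net.core.rmem_max unless the (root-only)
         * SO_RCVBUFFORCE option is used instead. */
        return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
    }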
Adrian, have you tried setting the priority of your outgoing communication process higher than that of the process receiving the incoming data? This can be done with the nice command. Note that in Unix/Linux, the lower the number, the higher the priority.
Otherwise I am not sure this is possible without a direct tie-in between the sending and receiving applications, which would let you effectively ignore the incoming connections that are ready to read from until any data you have has been sent out.
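For completeness, here is a minimal sketch of the same renice idea done programmatically, with placeholder PIDs; negative nice values (higher priority) require root or CAP_SYS_NICE:

    /* Minimal sketch: raise the priority of the sending process and lower the
     * priority of the receiving one. PIDs are hypothetical placeholders. */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t sender_pid = 1234;      /* process doing the outgoing traffic */
        pid_t receiver_pid = 5678;    /* process handling the incoming traffic */

        if (setpriority(PRIO_PROCESS, sender_pid, -10) != 0)
            perror("setpriority(sender)");    /* needs root / CAP_SYS_NICE */
        if (setpriority(PRIO_PROCESS, receiver_pid, 10) != 0)
            perror("setpriority(receiver)");
        return 0;
    }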

Using Fleck Websocket for 10k simultaneous connections

I'm implementing a websocket-secure (wss://) service for an online game where all users will be connected to the service as long as they are playing the game. This will use a high number of simultaneous connections, although the traffic won't be a big problem, as the service is used for chat, storage and notifications... not for real-time data synchronization.
I wanted to use Alchemy-Websockets, but it doesn't support TLS (wss://), so I have to look for another service like Fleck (or other).
Alchemy has been tested with a high number of simultaneous connections, but I didn't find similar tests for Fleck, so I need to get some real info from users of Fleck.
I know that Fleck is non-blocking and uses async calls, but I need some real info, because it might be abusing threads, the garbage collector, or any other aspect that won't be visible at a lower number of connections.
I will use C# for the client as well, so I need neither hybiXX compatibility nor fallback; I just need scalability and TLS support.
I finally added Mono support to WebSocketListener.
Check here how to run WebSocketListener in Mono.
10K connections is no small thing. WebSocketListener is asynchronous and it scales well. I have done tests with 10K connections and it should be fine.
My tests show that WebSocketListener is almost as fast and scalable as the Microsoft one, and performs better than Fleck, Alchemy and others.
I made a test on a Windows machine with a Core2Duo E8400 processor and 4 GB of RAM.
The results were not encouraging, as it started delaying handshakes after it reached ~1000 connections, i.e. it would take about one minute to accept a new connection.
These results improved when I used XSockets, which reached 8000 simultaneous connections before the same thing happened.
I tried to test on a Linux VPS with Mono, but I don't have enough experience with Linux administration, and a few system settings related to TCP, etc. needed to be changed to allow a high number of concurrent connections, so I could only reach ~1000 on the default settings; after that the app crashed (in both the Fleck test and the XSockets test).
On the other hand, I tested node.js, and it seemed simpler to manage a very high number of connections, as node didn't crash when it reached the TCP limits.
All the tests were echo tests: the server sends the same message back to the client who sent it and to one other randomly chosen connected client, and each connected client sends a random ~30-character text message to the server at a random interval between 0 and 30 seconds.
I know my tests are not generic enough and I encourage anyone to run their own tests instead, but I just wanted to share my experience.
When we decided to try Fleck, we implemented a wrapper for the Fleck server and a JavaScript client API so that we could send acknowledgment messages back to the server. We wanted to test the performance of the server: message delivery time, percentage of lost messages, etc. The results were pretty impressive for us, and we are currently using Fleck in our production environment.
We have 4000-5000 concurrent connections during peak hours. On average 40 messages are sent per second. The acknowledged-message ratio (acknowledged messages / total sent messages) never drops below 0.994. The average round trip for messages is around 150 milliseconds (the time between the server sending a message and receiving its ack). Finally, we did not have any memory-related problems with the Fleck server even under heavy usage.

Resources