How to send big data via UDP socket? - multithreading

I am building an application to send a large amount of data from a client to a server via UDP. My questions are:
Should I use one thread or multiple threads to send the data?
If I use multiple threads, should I use one socket for all threads or one socket per thread?
Thanks,

Should I use one thread or multiple threads to send the data?
Either way can work, so it's mostly a matter of personal preference. If it was me, I would use a single thread rather than multiple threads, because multiple threads are a lot harder to implement correctly, and in this case they won't buy you any additional performance, since your throughput bottleneck is almost certainly going to be either your hard disk or your network card, not the speed of your CPU core(s).
If I use multiple threads, should I use one socket for all threads or one socket per thread?
Again, either way will work (for UDP), but if it was me, I would use one socket per thread, if only because then you don't have to worry so much about race conditions during setup and shutdown: each thread simply creates and destroys its own separate/private socket, so there's no worrying about which thread does what to the socket, or when.
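Not the answerer's code, but a minimal sketch of the one-socket-per-thread idea in C; the job struct, CHUNK size, and destination setup are illustrative:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define CHUNK 1400  /* stay below a typical Ethernet MTU to avoid IP fragmentation */

struct job { const char *buf; size_t len; struct sockaddr_in dest; };

static void *sender(void *arg)
{
    struct job *j = arg;
    int s = socket(AF_INET, SOCK_DGRAM, 0);   /* private socket: nothing shared */
    for (size_t off = 0; off < j->len; off += CHUNK) {
        size_t n = j->len - off < CHUNK ? j->len - off : CHUNK;
        sendto(s, j->buf + off, n, 0,
               (struct sockaddr *)&j->dest, sizeof j->dest);
    }
    close(s);   /* each thread tears down its own socket: no shutdown races */
    return NULL;
}

Each thread would be started with pthread_create, handed its own struct job, and never touch another thread's socket.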

Related

Websockets: listen multiple connections simultaneously?

I am working on a project whose goal is to receive and store real-time data from financial exchanges, using websockets. I have some very general questions about the technology.
Suppose that I have two websocket connections open, receiving real time data from two different servers. How do I make sure not to miss any messages? I have learned a bit of asynchronous programming (python asyncio) but it does not seem to solve the problem: when I listen to one connection, I cannot listen to the other one at the same time, right?
I can think of two solutions: the first one would require that the servers use a buffer system to send their data, but I do not think this is the case (Binance, Bitfinex...). The second solution I see is to listen to each websocket on a different core. If my laptop has 8 cores I can listen to 8 connections and be sure not to miss any messages. I guess I can then scale up by using a cloud service.
Is that correct or am I missing something? Many thanks.
when I listen to one connection, I cannot listen to the other one at the same time, right?
Wrong.
When using an evented programming design, you will be using an IO "reactor" that adds IO-related events to the event loop.
This allows your code to react to events from a number of connections.
It's true that the code reacts to the events in sequence, but as long as your code doesn't "block", these events could be handled swiftly and efficiently.
Blocking code should be avoided and big / complicated tasks should be fragmented into a number of "events". There should be no point at which your code is "blocking" (waiting) on an IO read or write.
This will allow your code to handle all the connections without significant delays.
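For illustration, here is a minimal sketch of that reactor idea in C using poll(2); the two already-connected fds are assumed (a real websocket client adds TLS and framing on top):

#include <poll.h>
#include <unistd.h>

void reactor(int conn_a, int conn_b)
{
    struct pollfd pfd[2] = { { .fd = conn_a, .events = POLLIN },
                             { .fd = conn_b, .events = POLLIN } };
    char buf[4096];
    for (;;) {
        poll(pfd, 2, -1);                 /* sleep until either fd has an event */
        for (int i = 0; i < 2; i++)
            if (pfd[i].revents & POLLIN) {
                ssize_t n = read(pfd[i].fd, buf, sizeof buf);
                (void)n;                  /* react here; keep this short and non-blocking */
            }
    }
}

One loop watches both connections and reacts to whichever has data, so neither is "missed" while the other is being read; python asyncio does the same thing under the hood.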
...the first one would require that the servers use a buffer system to send their data...
Many evented frameworks use an internal buffer that streams to the IO when "ready" events are raised. For example, look up the 'drain' event in node.js (or on_ready in facil.io).
This is a convenience feature rather than a requirement.
The event loop might just as well raise an "on ready" event and assume your code will handle the buffering itself after partial write calls return EAGAIN / EWOULDBLOCK.
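As a sketch of that buffering (illustrative names, assuming a non-blocking socket):

#include <errno.h>
#include <string.h>
#include <unistd.h>

struct outbuf { char data[65536]; size_t head, tail; };

/* Try to flush; returns 0 when drained, 1 when the loop should wait for
 * an "on ready" (writable) event, -1 on a real error. */
static int flush(int fd, struct outbuf *b)
{
    while (b->head < b->tail) {
        ssize_t n = write(fd, b->data + b->head, b->tail - b->head);
        if (n > 0) { b->head += (size_t)n; continue; }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 1;   /* kernel buffer full: re-arm the writable event and return to the loop */
        return -1;
    }
    b->head = b->tail = 0;
    return 0;           /* fully drained: stop watching for writability */
}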
The second solution I see is to listen to each websocket on a different core.
No need. A single thread on a single core with an evented design should support thousands (and tens of thousands) of concurrent clients with reasonable loads (per-client load is a significant performance factor).
Attaching TCP/IP connections to a specific core can (sometimes) improve performance, but this is a many-to-one relationship. If we had to dedicate a CPU core per connection, then server prices would shoot through the roof.

epoll: must I use multi-threading

I've got basic knowledge about epoll from here. I know that epoll can monitor multiple FDs and handle them.
My question is: can a heavy event block the server so I must use multithreading?
For example, the epoll of a server is monitoring 2 sockets, A and B. Now A starts to send a lot of messages to the server, so the server starts to read them. One second later, B starts to send messages too, while A is still sending. In this case, do I need to create a thread for these read actions? If I don't, does it mean that the server has no chance to get the messages from B until A finishes its sending?
If you can process incoming messages fast enough (no blocking calls, no heavy computations), you don't need a separate thread. Otherwise, you would benefit from going multi-threaded.
In any case, it helps to understand what happens when you have only one thread and you can't process messages fast enough. If you are working with the TCP protocol, the machines sending you the data will simply reduce their transmission rate. When using UDP, some incoming packets will get dropped.
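A minimal sketch of the single-threaded case, assuming level-triggered epoll and a hypothetical handle() callback; bounding the work done per ready fd per wakeup is what keeps a busy A from starving B:

#include <sys/epoll.h>
#include <unistd.h>

void handle(int fd, const char *data, size_t n);  /* hypothetical application callback */

void loop(int epfd)
{
    struct epoll_event ev[64];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(epfd, ev, 64, -1);
        for (int i = 0; i < n; i++) {
            /* One bounded read per ready fd. If A still has more data,
             * level-triggered epoll reports it again on the next iteration,
             * after B (and everyone else) has had its turn. */
            ssize_t r = read(ev[i].data.fd, buf, sizeof buf);
            if (r > 0)
                handle(ev[i].data.fd, buf, (size_t)r);
        }
    }
}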

Multi threaded Linux Socket programming design

I am trying to write a server program which, so far, supports one client. Over the few days I have been developing it, I concluded that I needed threads. The reason for this decision is that I take input from a wifi socket, then process it and finally write it to a file; the processing is slow, so I needed an input thread -> circular buffer -> output thread pattern with a producer-consumer model, which is quite common in network programming.
Now the situation becomes complicated, as I need to manage client disconnection and reconnection. I thought of using pthread_exit() and cleaning up all the semaphores, then reinitializing them each time the single client reconnects.
My question is: is this an efficient approach, i.e., killing the threads and semaphores and recreating them every time? Are there any better solutions?
Thanks.
My question is: is this an efficient approach, i.e., killing the threads and semaphores and recreating them every time? Are there any better solutions?
Learn how to use non-blocking sockets and an event loop. Or use a library that provides TCP sessions for you using non-blocking sockets under the hood. Such as boost::asio.
Learn how to use multi-threading without polluting your code with any synchronization primitives by using message passing to communicate between threads, not shared state. The event loop library you use for non-blocking I/O should also provide means for cross-thread message passing.
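One way to sketch that message-passing idea in plain C: a pipe, created with pipe(chan) at startup, carries ownership of messages from worker threads to the event-loop thread, which watches the pipe's read end like any other fd (all names here are illustrative):

#include <unistd.h>

struct msg { int client_fd; char payload[256]; };

static int chan[2];   /* chan[0]: event loop reads, chan[1]: workers write */

static void post(struct msg *m)           /* called from any worker thread */
{
    write(chan[1], &m, sizeof m);         /* writes the pointer itself; atomic
                                             for sizes <= PIPE_BUF, so no lock */
}

static void on_channel_readable(void)     /* called from the event loop */
{
    struct msg *m;
    if (read(chan[0], &m, sizeof m) == (ssize_t)sizeof m) {
        /* process m, then free it: ownership moved with the pointer,
           so there is no shared mutable state and no mutex */
    }
}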
Some comments and suggestions.
1- In TCP, detecting that the other side has silently disconnected is very difficult, if not impossible. A client could disconnect by sending a RST TCP message to the server or by sending a FIN message; this is the good case. Sometimes the client disconnects without notice (crash, cable disconnection, etc.).
One suggestion here is that you consider the way client and server will communicate. For example, you can use the function "select" to set a timeout for receiving a message from the client, and so detect a silent client (see the sketch after these suggestions).
Additionally, depending on the programming language and operating system you may need to handle broken pipe (SIGPIPE) signal (in Linux, with C/C++), for a server trying to send a message through a connection closed by the client.
2- Regarding semaphores, you shouldn't need to clean up semaphores in any special way when a client disconnects; applying the common good practice of locking and unlocking mutexes should be enough. Also, resources like file descriptors need to be released before ending the thread, either by returning from the thread start function or with pthread_exit. Maybe I didn't understand this part of the question.
3- Regarding threads: if you work with multiple threads, the optimum is to have a pool of pre-created consumer/worker threads that check the circular buffer and consume the next available connection. Creating and destroying threads is costly for the operating system.
Threads are resource-consuming, and you may exhaust operating system resources if you need to create 1,000 threads, for example.
Another alternative is to have only one consumer thread that manages all connections (sockets) asynchronously: a) each connection has its own state; b) the main thread goes through all connections and uses the function "select" to detect when a connection read or write is ready; c) it uses non-blocking sockets, although this is not essential, because select already tells you which sockets are ready and will not block.
You can use the functions select, poll, or epoll.
One link about select and non-blocking sockets: Using select() for non-blocking sockets
Another link, with an example: http://linux.die.net/man/2/select
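To illustrate suggestion 1, a sketch of the select-timeout and SIGPIPE ideas (the 30-second timeout is an arbitrary choice):

#include <signal.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* 1a: wait up to 30 s for data from the client; a timeout is treated as a
 * silent disconnection. */
int wait_for_client(int fd)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    int r = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (r == 0) return 0;    /* timed out: assume the client silently vanished */
    if (r < 0)  return -1;   /* select() itself failed */
    return 1;                /* readable; note that read() returning 0 still means FIN */
}

/* 1b: keep a write to a client-closed connection from killing the server.
 * Call once at startup, or use send(..., MSG_NOSIGNAL) per call on Linux. */
void ignore_sigpipe(void) { signal(SIGPIPE, SIG_IGN); }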

Threading and scaling model for TCP server with epoll

I've read the C10K doc as well as many related papers on scaling up a socket server. All roads point to the following:
Avoid the classic mistake of "thread per connection".
Prefer epoll over select.
Likewise, the legacy async IO mechanisms in Unix may be hard to use.
My simple TCP server just listens for client connections on a listen socket on a dedicated port. Upon receiving a new connection, it parses the request and sends a response back. Then it gracefully closes the socket.
I think I have a good handle on how to scale this up on a single thread using epoll: just one loop that calls epoll_wait for the listen socket as well as for the existing client connections. Upon return, the code handles creating new client connections as well as managing the state of existing connections, depending on which socket just got signaled. And perhaps some logic to manage connection timeouts, graceful closing of sockets, and efficient resource allocation for each connection. Seems straightforward enough.
But what if I want to scale this to take advantage of multiple threads and multiple cpu cores? The core idea that springs to mind is this:
One dedicated thread for listening for incoming connections on the TCP listen socket. Then a set of N threads (or a thread pool) to handle all the active concurrent client connections. Then invent some thread-safe way in which the listen thread "dispatches" the new connection (socket) to one of the available worker threads (à la IOCP in Windows). Each worker thread will use an epoll loop over all the connections it is handling to do what the single-threaded approach would do.
Am I on the right track? Or is there a standard design pattern for doing a TCP server with epoll on multiple threads?
Suggestions on how the listen thread would dispatch a new connection to the thread pool?
Firstly, note that it's C10K: don't concern yourself if you're handling fewer than about 100 connections (on a typical system). Even then, it depends on what your sockets are doing.
Yes, but keep in mind that epoll manipulation requires system calls, and their cost may or may not be more expensive than the cost of managing a few fd_sets yourself. The same goes for poll. At low counts it's cheaper to do the processing in user space each iteration.
Asynchronous IO is very painful when you're not constrained to just a few sockets that you can juggle as required. Most people cope by using event loops, but this fragments and inverts your program flow. It also usually requires making use of large, unwieldy frameworks for this purpose since a reliable and fast event loop is not easy to get right.
The first question is, do you need this? If you're handily coping with the existing traffic by spawning off threads to handle each incoming request, then keep doing it this way. The code will be simpler for it, and all your libraries will play nicely.
As I mentioned above, juggling simultaneous requests can be complex. If you want to do this in a single loop, you'll also need to make guarantees about CPU starvation when generating your responses.
The dispatch model you proposed is the typical first step solution if your responses are expensive to generate. You can either fork or use threads. The cost of forking or generating a thread should not be a consideration in selecting a pooling mechanism: rather you should use such a mechanism to limit or order the load placed on the system.
Batching sockets onto multiple epoll loops is excessive. Use multiple processes if you're this desperate. Note that it's possible to accept on a socket from multiple threads and processes.
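To illustrate that last note, a minimal sketch of several threads blocking in accept() on one shared listening socket; the port and thread count are arbitrary:

#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static void *acceptor(void *arg)
{
    int lfd = *(int *)arg;
    for (;;) {
        int c = accept(lfd, NULL, NULL);   /* safe to share lfd across threads:
                                              the kernel wakes one acceptor per connection */
        if (c < 0) continue;
        /* handle_connection(c) would go here (hypothetical) */
        close(c);
    }
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET,
                             .sin_port = htons(8080),
                             .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&a, sizeof a);
    listen(lfd, 128);
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, acceptor, &lfd);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
}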
I would guess you are on the right track. But I also think the details depend upon the particular situation (bandwidth, request patterns, individual request processing, etc.). I think you should try, and benchmark carefully.

UNIX socket magic. Recommended for high performance application?

I'm looking to transfer an accept()ed socket between processes using sendmsg(). In short, I'm trying to build a simple load balancer that can deal with a large number of connections without having to buffer the stream data.
Is this a good idea when dealing with a large number (let's say hundreds) of concurrent TCP connections? If it matters, my system is Gentoo Linux.
You can share the file descriptor as per the previous answer here.
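For reference, a minimal sketch of what that fd transfer looks like with sendmsg() and SCM_RIGHTS over an AF_UNIX socket; function and parameter names are illustrative, and the receiver does the mirror-image recvmsg():

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int chan, int fd)   /* chan: AF_UNIX socket to the worker process */
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };  /* must send >= 1 byte */
    union { struct cmsghdr h; char buf[CMSG_SPACE(sizeof(int))]; } u;
    memset(&u, 0, sizeof u);
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = u.buf, .msg_controllen = sizeof u.buf };
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type  = SCM_RIGHTS;               /* the kernel duplicates fd into the receiver */
    c->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));
    return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}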
Personally, I've always implemented servers using pre-fork. The parent sets up the listening socket, spawns (pre-forks) children, and each child does a blocking accept. I used pipes for parent <-> child communication.
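A bare-bones sketch of that pre-fork pattern; the port, worker count, and the serve_client step are illustrative:

#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = { .sin_family = AF_INET,
                             .sin_port = htons(9000),
                             .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&a, sizeof a);
    listen(lfd, 128);
    for (int i = 0; i < 4; i++) {                /* pre-fork 4 workers */
        if (fork() == 0) {
            for (;;) {                           /* child: lfd is inherited */
                int c = accept(lfd, NULL, NULL); /* kernel spreads accepts across children */
                if (c >= 0) { /* serve_client(c) -- hypothetical */ close(c); }
            }
        }
    }
    for (;;) wait(NULL);                         /* parent just reaps children */
}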
Until someone does a benchmark and establishes how "hard" it is to send a file descriptor, this remains speculation (someone might pop up: "Hey, sending the descriptor like that is dirt-cheap"). But here goes.
You will (likely, read above) be better off if you just use threads. You can have the following workflow:
Start a pool of threads that just wait around for work. Alternatively you can just spawn a new thread when a request arrives (it's cheaper than you think)
Use epoll(7) to wait for traffic (wait for connections + interesting traffic)
When interesting traffic arrives you can just dispatch a "job" to one of the threads (sketched below).
Now, this does circumvent the whole descriptor-sending part. So what's the catch? The catch is that if one of the threads crashes, the whole process crashes. So it is up to you to benchmark and decide what's best for your server.
Personally I would do it the way I outlined it above. Another point: if the workers are children of the process doing the accept, sending the descriptor is unnecessary.
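A rough sketch of the workflow outlined above, with a tiny locked queue standing in for a real work queue (illustrative only; a production server would also arm EPOLLONESHOT so the same fd isn't dispatched to two workers at once, and would check for a full queue):

#include <pthread.h>
#include <sys/epoll.h>

#define QCAP 1024
static int q[QCAP], qh, qt;
static pthread_mutex_t qmu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcv = PTHREAD_COND_INITIALIZER;

static void push_job(int fd)                  /* called by the event-loop thread */
{
    pthread_mutex_lock(&qmu);
    q[qt++ % QCAP] = fd;
    pthread_cond_signal(&qcv);                /* wake one waiting worker */
    pthread_mutex_unlock(&qmu);
}

static void *worker(void *arg)                /* one of the pooled threads */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qmu);
        while (qh == qt) pthread_cond_wait(&qcv, &qmu);
        int fd = q[qh++ % QCAP];
        pthread_mutex_unlock(&qmu);
        /* serve fd here (hypothetical request handler) */
        (void)fd;
    }
    return NULL;
}

static void loop(int epfd)                    /* event-loop thread */
{
    struct epoll_event ev[64];
    for (;;) {
        int n = epoll_wait(epfd, ev, 64, -1);
        for (int i = 0; i < n; i++)
            push_job(ev[i].data.fd);          /* "dispatch a job to one of the threads" */
    }
}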
