I have read that the send() function on Winsock blocks until the ACK for the last packet is received. Now I am writing a server for a turn-based role-playing game. Everything is handled by one thread (per 64 sockets): a request is received, handled, and a response written to the socket(s). This process cannot be interrupted.
Is it possible to handle, say 1000 clients (one thread for every 64 sockets) with this method?
Wouldn't it block the whole server if a send() takes too long to complete, or if a client maliciously withholds the ACK, or the connection gets interrupted?
Shall I split the networking and the request handling into 2 threads? Even then, the thread handling the network transfers could still be blocked by a send() or recv().
Or would it be best to use overlapped I/O?
send() blocks only if the socket is running in blocking mode and the socket's outbound buffer fills up with queued data. If you are managing multiple sockets in the same thread, do not use blocking mode: if one receiver does not read data in a timely manner, all of the connections serviced by that thread are affected.

Use non-blocking mode instead. Then send() will report when the socket has entered a state where blocking would occur, and you can use select() to detect when the socket can accept new data again.

A better option is to use overlapped I/O or I/O completion ports. Submit outbound data to the OS and let the OS handle all of the waiting for you, notifying you when the data has eventually been accepted/sent. Do not submit new data for a given socket until you receive that notification. For scalability to a large number of connections, I/O completion ports are generally the better choice.
No, it doesn't work like that. From the MSDN documentation on send:
The successful completion of a send function does not indicate that the data was successfully delivered and received to the recipient. This function only indicates the data was successfully sent.
Let's say I create many threads on a single-core CPU. Each thread does an I/O operation, for example reading data from a database or another microservice.
What happens if I create thousands of threads that read something from a DB?
How this communication works?
I assume that in a thread we send some request to a DB or some HTTP call to another service, and after that the CPU is used by another thread. How is this communication handled? Does the OS hold the messages for the other threads and wait until those threads get CPU time to pass them the data?
Let's say I make 1000 calls in 1000 threads and each response is 1 MB of data. Where is this data buffered until the correct thread becomes active? (For example, we are spawning the tenth thread and have already got a response for the first one.)
Or maybe someone could point me to some good articles on this topic?
Every time a thread makes an I/O request the OS (kernel) queues that I/O and puts the thread to sleep (assuming we're talking about a synchronous I/O call).
"Queues that I/O" means setting up some link between the socket through which the I/O is performed and the network card queue, and setting up an internal OS buffer to hold request and response data.
When a response arrives at the network card, the OS adds the data to the socket's buffer and, typically, wakes up the thread that made the associated I/O request.
Note that while an HTTP or DB query response can be 1 MB, it is usually carried over a TCP/IP connection whose MTU is much smaller, so the TCP/IP implementation on the server side slices the response into many small packets.
If 1000 responses arrive at the same time, and the hardware cannot handle such a load, each server will have to send its packets more slowly, but the OS will still likely handle all such "streams" of responses in parallel.
I assume that in a thread we send some request to a DB or some HTTP call to another service, and after that the CPU is used by another thread. How is this communication handled? Does the OS hold the messages for the other threads and wait until those threads get CPU time to pass them the data?
It depends on the exact communication method used. Most commonly, it will be some kind of byte stream connection such as a TCP connection. In that case, the thread typically makes a blocking read operation that causes the kernel to mark that thread as waiting for I/O. It attaches the thread to a data structure associated with the TCP connection and does whatever is needed to make that I/O take effect.
When a response is received, the kernel code notices the thread waiting for activity. It then marks that original thread ready-to-run and the scheduler will eventually schedule it. When it runs, it resumes in the kernel's blocking I/O code, but this time there is data waiting for it, so it returns to user space and resumes execution.
Let's say I make 1000 calls in 1000 threads and each response is 1 MB of data. Where is this data buffered until the correct thread becomes active? (For example, we are spawning the tenth thread and have already got a response for the first one.)
It depends on exactly what communication method is used. If it's a TCP connection, then there are buffers associated with that connection. If it uses shared memory, then the other process just writes into that shared memory page.
I've got a basic knowledge of epoll from here. I know that epoll can monitor multiple FDs and handle their events.
My question is: can a heavy event block the server so I must use multithreading?
For example, the epoll of a server is monitoring 2 sockets, A and B. Now A starts to send lots of messages to the server, so the server starts to read them. One second later, B starts to send messages too, while A is still sending. In this case, do I need to create a thread for these read actions? If I don't, does that mean the server has no chance to read the messages from B until A finishes its sending?
If you can process incoming messages fast enough (no blocking calls, no heavy computations), you don't need a separate thread. Otherwise, you would benefit from going multi-threaded.
In any case, it helps to understand what happens when you have only one thread and you can't process messages fast enough. If you are using the TCP protocol, the machines sending you the data will simply reduce their transmission rate; with UDP, some incoming packets will get dropped.
I have the following situation:
A daemon that does a privileged operation on data that is kept in memory.
A multithreaded server currently running on about 30 cores handling user requests.
The server (1) would receive queries from (2), process them one by one, and return an answer. Each query to (1) would never block and only take a fraction of a microsecond on (1) to process, so we are guaranteed to get responses back fast unless (1) gets overrun by too much load.
Essentially, I would like to set up a situation where (1) listens on a UNIX domain socket and (2) writes requests and reads responses. However, I would like each thread of (2) to be able to read and write concurrently. My idea is to have one UNIX socket per thread for communication between (1) and (2), and have (1) block in epoll_wait on these sockets, processing requests one by one. Each thread on (2) would then read and write independently on its socket.
The problem that I see with this approach is that I can't easily grow the number of threads on (2) dynamically. Is there a way to accomplish this that is flexible with respect to runtime configuration? I guess one approach would be to have a large number of sockets: a thread on (2) would pick a socket at random, take a mutex on it, write a query, block waiting for the response, then release the mutex once the response from (1) arrives.
Anyone have better ideas?
A viable possibility is to go with your own proposal and have each thread create its own socket for communicating with the daemon. You can use streaming (TCP) sockets, which easily solve your problem of adding more threads dynamically:
The daemon listens on a particular port, using socket(), bind() and listen(). The socket being listened to is initially the only thing in its epoll_wait set.
The client threads connect to this port with connect().
The daemon accepts (with accept()) each incoming connection, creating a new socket, which is added to its epoll_wait set with epoll_ctl().
The above procedure can be used to arbitrarily add as many sockets as you need, all with a single epoll_wait loop on the daemon side.
My server uses non-blocking socket multiplexing. I want to use one thread for receiving incoming connections and handling input, and another thread for sending data over the sockets. It would look like this:
reader_worker(...)
{
    ...
    select(fd_max + 1, &read_set, NULL, NULL, NULL);
    ...
}

writer_worker(...)
{
    ...
    select(fd_max + 1, NULL, &write_set, NULL, NULL);
    ...
}
Since I don't want to miss incoming connections, I imagine it is better to receive and send in separate threads. Do I have to lock a socket that might be in read_set as well as in write_set? Is this the correct approach, or does this method not offer any performance improvement?
You're not going to miss any incoming connections if you do it in one thread. Even if you have large amounts of data to send or receive, the latency of a send() or recv() call is determined by the size of the local buffers in the network stack rather than by the remote peer, so you will quickly be back to accepting incoming connections.
Two things though, where I'm not sure about your code samples:
Put the socket for accepting connections into the same fd set as the accepted connections, for use in a single select() call. Unless you bind to different network interfaces, using multiple threads is not going to make your program run faster.
If you are processing incoming data, this is where a second thread becomes useful. In general, you have resources like your network interface card, CPU, and hard disk, and it is useful to have one thread per resource: one to handle network I/O (receiving requests, sending responses) and a second one to handle the actual requests. If you have multiple cores, you can also use multiple threads for handling requests, provided that is a CPU-bound task. If all these requests do is serve pictures from a single hard disk, using more than one request-handling thread won't give you any additional performance.
I'm using TClientSocket or Indy's TIdTCPClient (depending on the project). I have a few threads, each processing items, which sometimes need to send data over the connected client socket. (Data read from the socket is NOT used in the processing threads.)
Basically my question is...
Is this possible?
Is it "safe"?
Or should I
have a client socket per thread, or
use some kind of marshalling/critical sections?
delphi-7 indy-9
Multiple threads can read and write on the same socket. Note that every time you call accept, it extracts the first connection from the queue of pending connections and creates a new socket with the same socket properties, allocating a new file descriptor for it.
So you can dedicate one thread per accepted connection.
If you are asking whether multiple threads can write to/read from a single accepted connection, you will need locking, and hence lose some of the parallelism benefits. If you want to run a long task in a thread and then write the result to the socket, use synchronization so the results are written in the correct order.