epoll with edge triggering, one shot and multithreading

This is a question regarding this answer: https://stackoverflow.com/a/14241095/2332808 (I would comment on it, but newly created accounts apparently can't; sorry for the noise. Resources on EPOLLET/multithreading are hard to find...)
It suggests using epoll as follows:
epoll_ctl() to activate notifications (and to reactivate them if EPOLLONESHOT is used).
epoll_wait() to receive notifications.
On input: read()/recv()/accept() in a loop until the call fails with EAGAIN.
But, assuming multiple threads are blocked in epoll_wait() on the same epoll fd, wouldn't this risk another thread being woken up for the same fd if more data arrives before you're done reading, ending up with two threads processing the same fd?
Even if you turn things around and read() until EAGAIN, call epoll_ctl(), and then call read() again to check that there's still nothing (to avoid the race where data arrives between the last read and the epoll_ctl())...
...there is still no guarantee that data won't arrive right after the epoll_ctl(), leaving both the thread doing the final read() check and another freshly woken thread working on the same fd.
I guess having a lock per fd would be an acceptable solution, but is that the "approved" way to use epoll in edge-triggered mode with multiple threads polling the same epoll fd?

Yes, you still need to do proper locking to guard against those cases you describe - and using a lock per fd is the most sensible approach to do that.
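To make that concrete, here is a minimal sketch of the per-fd lock combined with EPOLLONESHOT re-arming; the struct conn type and handle_conn function are hypothetical names for illustration, not from the linked answer. Even if a second thread is woken for the same fd despite the one-shot re-arm, the mutex ensures only one thread works on it at a time:

```c
#include <errno.h>
#include <pthread.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Hypothetical per-connection state: one mutex per fd. */
struct conn {
    int fd;
    pthread_mutex_t lock;
};

/* Called by whichever thread epoll_wait() handed this connection to. */
void handle_conn(int epfd, struct conn *c)
{
    pthread_mutex_lock(&c->lock);       /* serialize all work on this fd */

    char buf[4096];
    ssize_t n;
    while ((n = read(c->fd, buf, sizeof buf)) > 0) {
        /* ... process n bytes ... */
    }
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Drained for now: re-arm the one-shot notification. */
        struct epoll_event ev = {
            .events = EPOLLIN | EPOLLET | EPOLLONESHOT,
            .data.ptr = c,
        };
        epoll_ctl(epfd, EPOLL_CTL_MOD, c->fd, &ev);
    }

    pthread_mutex_unlock(&c->lock);
}
```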

Related

Best practice for waiting for events on multiple threads in Linux (like WaitForMultipleObjects)

In Windows there is the API WaitForMultipleObjects which, if one event is registered in many threads, will wake only one thread when the event occurs. I now have to port an application that uses this in its thread pool, and I am looking for the best practice to achieve this on Linux.
I am aware of epoll, which can wait on fds (which I can create with pipe()), but waiting on one fd in multiple threads may wake every thread on an event when only one is needed.
What would be the best practice to implement this behaviour on Linux? I really don't want to split an event up into as many fds as there are worker threads, as this may hit the fd limit on some systems, given that I have many events (all of which would be split up).
What I thought about is creating one master thread that delegates work to an available worker (or queues the task if all workers are busy), but that means one additional context switch (and thus giving up computation time), as the master wakes up and then wakes up another worker. I would do this only if there is no other way to implement this cleanly. Unfortunately I cannot get rid of the current architecture, so I need to work around this.
Is there any API that would be applicable for this kind of problem?
epoll() is the correct solution, although you could consider using eventfd() file descriptors rather than pipe() file descriptors for the event signalling. See this text from the epoll(7) man page:
If multiple threads (or processes, if child processes have inherited the epoll file descriptor across fork(2)) are blocked in epoll_wait(2) waiting on the same epoll file descriptor and a file descriptor in the interest list that is marked for edge-triggered (EPOLLET) notification becomes ready, just one of the threads (or processes) is awoken from epoll_wait(2). This provides a useful optimization for avoiding "thundering herd" wake-ups in some scenarios.
So to get this single-wakeup behaviour, you have to be calling epoll_wait() in each thread on the same epoll descriptor, and you have to have registered your event-notifying file descriptors in the epoll set as edge-triggered.
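To illustrate, a minimal sketch of that setup, assuming an eventfd registered edge-triggered in a single epoll instance shared by a few worker threads:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int epfd, efd;

/* Every worker blocks on the same epfd; with EPOLLET only one of
 * them is woken when the eventfd becomes readable. */
static void *worker(void *arg)
{
    struct epoll_event out;
    for (;;) {
        if (epoll_wait(epfd, &out, 1, -1) == 1) {
            uint64_t count;                 /* reading drains the counter */
            if (read(efd, &count, sizeof count) == sizeof count) {
                /* ... perform 'count' units of work ... */
            }
        }
    }
    return arg;
}

int main(void)
{
    epfd = epoll_create1(0);
    efd  = eventfd(0, EFD_NONBLOCK);        /* 64-bit kernel counter */

    /* EPOLLET is what gives the single-wakeup behaviour quoted above. */
    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = efd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);

    pthread_t tid[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, NULL);

    uint64_t one = 1;
    write(efd, &one, sizeof one);           /* wakes exactly one worker */

    pause();                                /* demo: park the main thread */
}
```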

How can I block a single thread for 3 different events (semaphore, pthread condition, and blocking socket recv)?

I have a multi-threaded system in which a main thread has to block waiting for any one of the following 4 events to happen:
inter-process semaphore (sem_wait())
pthread condition (pthread_cond_wait())
recv() from socket
timeout expiring
Ideally I'd like a mechanism that unblocks the main thread when any of the above occurs, something like ppoll() with a suitable timeout parameter. Non-blocking calls and polling are out of the picture due to the impact on CPU usage, while having separate threads blocking on the different events is not ideal due to the increased latency (a thread unblocking from one of the events would still have to wake up the main one).
The code will be almost exclusively compiled under Linux with gcc toolchain, if that helps, but some portability would be good, if at all possible.
Thanks in advance for any suggestion
The mechanisms for waiting on multiple types of objects on Unix-like systems are not that great. In general, the idea is to use file descriptors for IPC wherever possible, rather than multiple different IPC mechanisms.
From your comment, it sounds like you can edit or change the condition variable, but not the code that signals the semaphore. So what I'd recommend is something like the following.
Change the condition variable to either a pipe (for more portability) or an eventfd(2) object (Linux-specific). The notifying thread writes to the pipe whenever it wants to signal the main thread. This will allow you to select(2) or poll(2) or whatever in the main thread on both that pipe and the socket.
Because you're stuck with the semaphore, I think the best option would be to create another thread, whose sole purpose is to wait for the semaphore using sem_wait(), and then write to another pipe or eventfd(2) object when it is notified by whatever process is doing sem_post(). In the main thread, just add this other file descriptor to your select(2) set.
So you'll have three descriptors: one for the socket, one taking the place of the condition variable, and one which is written to when the semaphore is incremented. You can then wait on all three using your favorite I/O multiplexing method, and include directly whatever timeout you'd like.
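A rough sketch of that three-descriptor arrangement, assuming poll(2) and eventfd(2); the semaphore name and the sem_proxy/main_loop helpers are invented for the example:

```c
#include <fcntl.h>
#include <poll.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/socket.h>
#include <unistd.h>

static sem_t *the_sem;    /* the inter-process semaphore you can't change */
static int sem_efd;       /* written whenever the semaphore is posted     */

/* Helper thread: converts sem_post() into something poll()able. */
static void *sem_proxy(void *arg)
{
    uint64_t one = 1;
    for (;;) {
        sem_wait(the_sem);
        write(sem_efd, &one, sizeof one);
    }
    return arg;
}

/* Main thread: one poll() covers socket, "condition", semaphore, timeout. */
static void main_loop(int sock_fd, int cond_efd)
{
    struct pollfd fds[3] = {
        { .fd = sock_fd,  .events = POLLIN },   /* recv() readiness      */
        { .fd = cond_efd, .events = POLLIN },   /* replaces the cond var */
        { .fd = sem_efd,  .events = POLLIN },   /* proxied semaphore     */
    };
    for (;;) {
        int n = poll(fds, 3, 3000);             /* 3000 ms timeout */
        if (n == 0) { /* timeout expired */ }
        if (fds[0].revents & POLLIN) { /* recv() won't block now */ }
        if (fds[1].revents & POLLIN) {
            uint64_t v; read(cond_efd, &v, sizeof v);
        }
        if (fds[2].revents & POLLIN) {
            uint64_t v; read(sem_efd, &v, sizeof v);
        }
    }
}

int main(void)
{
    the_sem = sem_open("/hypothetical_sem", O_CREAT, 0600, 0);
    sem_efd = eventfd(0, 0);
    int cond_efd = eventfd(0, 0);

    pthread_t t;
    pthread_create(&t, NULL, sem_proxy, NULL);

    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);  /* placeholder socket */
    main_loop(sock_fd, cond_efd);
}
```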

SetEvent ResetEvent WaitForMultipleObjectsEx - Race condition?

I am not able to understand PulseEvent or the race condition it involves, so to avoid it I am trying to use SetEvent instead, and to call ResetEvent every time before WaitForMultipleObjectsEx.
This is my flow:
Thread ONE - uses CreateEvent to create an auto-resetting event, then spawns thread TWO and tells it about the event.
Thread ONE - tells thread TWO to run.
Thread TWO - calls ResetEvent on the event and then immediately starts WaitForMultipleObjectsEx on the event and on some other handles for file watching. If WaitForMultipleObjectsEx returns and it is not due to the event, the loop restarts immediately. If WaitForMultipleObjectsEx returns because the event became signaled, the loop does not restart.
So now imagine this case please:
Thread TWO - loop is running
Thread ONE - needs to add a path, so it does (1) SetEvent, then (2) sends another message to thread TWO to add a path, and then (3) sends a message to thread TWO to restart the loop.
The add-path and restart-loop messages will not reach thread TWO unless I stop the loop in TWO, which is done by the SetEvent. Thread TWO will see it was stopped due to the event, so it won't restart the loop. It will then get the message to add the path, add it, and restart the loop.
Thread ONE - needs to stop the thread, so it does (1) SetEvent and then (2) waits for a message from thread TWO; when it gets that message, it terminates the thread.
Will this avoid race condition?
Thank you
Suppose the loop needs to be interrupted twice in succession. You're imagining a sequence of events something like this, on thread ONE and thread TWO:
1. Thread ONE realizes that the first interruption is complete.
2. Thread ONE sends a message telling TWO to restart the wait loop.
3. Thread TWO reads the message "restart the wait loop".
4. Thread TWO resets the event.
5. Thread TWO starts waiting.
6. Thread ONE now realizes that another interruption is needed.
7. Thread ONE sets the event to ask for another interruption.
8. Thread ONE sends a message related to the second interruption.
9. Thread TWO stops the loop, receives the message about the second interruption.
But since you don't have any control over the timing between the two threads, it might instead happen like this:
1. Thread ONE realizes that the first interruption is complete.
2. Thread ONE sends a message telling TWO to restart the wait loop.
3. Thread ONE now realizes that another interruption is needed.
4. Thread ONE sets the event to ask for another interruption.
5. Thread TWO reads the message "restart the wait loop".
6. Thread TWO resets the event.
7. Thread TWO starts waiting.
8. Thread ONE sends a message about the second interruption, but TWO isn't listening!
Even if the message passing mechanism is synchronous, so that ONE won't continue until TWO has read the message, it could happen this way:
1. Thread ONE realizes that the first interruption is complete.
2. Thread ONE sends a message telling TWO to restart the wait loop.
3. Thread TWO reads the message "restart the wait loop", but is then swapped out.
4. Thread ONE now realizes that another interruption is needed.
5. Thread ONE sets the event to ask for another interruption.
6. Thread TWO resets the event.
7. Thread TWO starts waiting.
8. Thread ONE sends a message about the second interruption, but TWO isn't listening!
(Obviously, a similar thing can happen if you use PulseEvent.)
One quick solution would be to use a second event for TWO to signal ONE at the appropriate point, i.e., after resetting the main event but before waiting on it, but that seems somewhat inelegant and also doesn't generalize very well. If you can guarantee that there will never be two interruptions in close-enough succession, you might simply choose to ignore the race condition, but note that it is difficult to reason about this because there is no theoretical limit to how long it might take for thread TWO to resume running after being swapped out.
The various alternatives depend on how the messages are being passed between the threads and any other constraints. [If you can provide more information about your current implementation I'll update my answer accordingly.]
This is an overview of some of the more obvious options.
If the message-passing mechanism is synchronous (if thread ONE waits for thread TWO to receive the message before proceeding) then using a single auto-reset event should just work. Thread ONE won't set the event until after thread TWO has received the restart-loop message. If the event is already set when thread TWO starts waiting, that just means that there were two interruptions in immediate succession; TWO will never stall waiting for a message that isn't coming. [This potential stall is the only reason I can think of why you might not want to use an auto-reset event. If you have another concern, please edit your question to provide more details.]
If it is OK for sending a message to be non-blocking, and you aren't already locked in to a particular solution, any of these options would probably be sensible:
User mode APCs (the QueueUserAPC function) provide a message-passing mechanism that automatically interrupts alertable waits.
You could implement a simple queue (protected by a critical section) which uses an event to indicate whether there is a message pending or not. In this case you can safely use a manual-reset event provided that you only manipulate it when you hold the same critical section that protects the queue.
You could use an auto-reset event in combination with any sort of thread-safe queue, provided only that the queue allows you to test for emptiness without blocking. The idea here is that thread ONE would always insert the message into the queue before setting the event, and if thread TWO sees that the event is set but it turns out that the queue is empty, the event is ignored. If efficiency is a concern, you might even be able to find a suitable lock-free queue implementation. (I don't recommend attempting that yourself.)
(All of those mechanisms could also be made synchronous by using a second event object.)
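As a rough sketch of the queue-plus-event option above (the one-slot "queue" and all names here are invented for illustration, and WaitForSingleObject stands in for the real code's WaitForMultipleObjectsEx):

```c
#include <windows.h>

/* Hypothetical one-slot "queue" guarded by a critical section; a real
 * implementation would hold a list of messages. */
static CRITICAL_SECTION g_lock;
static int    g_pending;               /* pending message count */
static HANDLE g_wake;                  /* auto-reset event      */

static void init(void)
{
    InitializeCriticalSection(&g_lock);
    g_wake = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset */
}

/* Thread ONE: enqueue first, then set the event. */
static void post_message(void)
{
    EnterCriticalSection(&g_lock);
    g_pending++;
    LeaveCriticalSection(&g_lock);
    SetEvent(g_wake);
}

/* Thread TWO: if the event was set but the queue is empty (a stale
 * wake-up), just go back to waiting. */
static void wait_for_message(void)
{
    for (;;) {
        WaitForSingleObject(g_wake, INFINITE);
        EnterCriticalSection(&g_lock);
        int have = g_pending;
        if (have) g_pending--;
        LeaveCriticalSection(&g_lock);
        if (have)
            return;                    /* got a real message */
    }
}

int main(void)
{
    init();
    post_message();
    wait_for_message();   /* returns immediately: a message is pending */
    return 0;
}
```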
I wouldn't recommend the following approaches, but if you happen to already be using one of these for messaging this is how you can make it work:
If you're using named pipes for messaging, you could use asynchronous I/O in thread TWO. Thread TWO would use an auto-reset event internally: you specify the event handle when you issue the I/O call, and Windows sets it when I/O arrives. From the point of view of thread ONE, there's only a single operation. From the point of view of thread TWO, if the event is set, a message is definitely available. (I believe this is somewhat similar to your original approach; you just have to issue the I/O call in advance rather than afterwards.)
If you're using a window queue for messaging, the MsgWaitForMultipleObjectsEx() function allows you to wait for a window message and other events simultaneously.
PS:
The other problem with PulseEvent, the one mentioned in the documentation, is that this can happen:
1. Thread TWO starts waiting.
2. Thread TWO is preempted by Windows and all user code on the thread stops running.
3. Thread ONE pulses the event.
4. Thread TWO is restarted by Windows, and the wait is resumed.
5. Thread ONE sends a message, but TWO isn't listening.
(Personally I'm a bit disappointed that the kernel doesn't deal with this situation; I would have thought that it would be possible for it to set a flag saying that the wait shouldn't be resumed. But I can only assume that there is a good reason why this is impractical.)
The Auto-Reset Events
Would you please try to change the flow so that there is just SetEvent and WaitForMultipleObjectsEx with auto-reset events? You may create more events if you need them. For example, each thread could have its own pair of events: one to receive notifications and another to report its state changes - you define the scheme that best suits your needs.
Since there will be only auto-reset events, there is no need for either ResetEvent or PulseEvent.
If you are able to change the logic of the algorithm flow this way, the program will become clear, reliable, and straightforward.
I advise this because this is how our applications have worked since the days of Windows NT 3.51 - we manage to do everything we need with just SetEvent and WaitForMultipleObjects (without the Ex suffix).
As for PulseEvent: as you know, it is very unreliable, even though it has existed since the very first version of Windows NT (3.1) - maybe it was reliable then, but not now.
To create an auto-reset event, pass FALSE for the bManualReset argument of the CreateEvent API function. (If this parameter is TRUE, the function creates a manual-reset event object, which requires the ResetEvent function to set the event state to non-signaled - this is not what you need.) With FALSE, the system automatically resets the event state to non-signaled after a single waiting thread has been released, i.e., after it returns from WaitForMultipleObjects, WaitForSingleObject, or another wait function that explicitly waits for this event to become signaled.
These auto-reset events are very reliable and easy to use.
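For reference, a minimal sketch of creating and consuming an auto-reset event, using just the API calls described above:

```c
#include <windows.h>

int main(void)
{
    /* bManualReset = FALSE -> auto-reset: the event drops back to
     * non-signaled as soon as one waiting thread is released. */
    HANDLE ev = CreateEvent(NULL, FALSE, FALSE, NULL);

    SetEvent(ev);                          /* signal once            */
    WaitForSingleObject(ev, INFINITE);     /* consumes the signal    */
    /* No ResetEvent needed: the event is already non-signaled here. */

    CloseHandle(ev);
    return 0;
}
```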
Let me make a few additional notes on PulseEvent. Even Microsoft has admitted that PulseEvent is unreliable and should not be used - see https://msdn.microsoft.com/en-us/library/windows/desktop/ms684914(v=vs.85).aspx - because only those threads are notified that are in the "wait" state at the moment PulseEvent is called. If they are in any other state, they will not be notified, and you may never know for sure what a thread's state is; even if you are responsible for the program flow, the state can be changed by the operating system contrary to your program logic. A thread waiting on a synchronization object can be momentarily removed from the wait state by a kernel-mode Asynchronous Procedure Call (APC) and returned to the wait state after the APC completes. If the call to PulseEvent occurs during the time when the thread has been removed from the wait state, the thread will not be released, because PulseEvent releases only those threads that are waiting at the moment it is called.
You can find out more about the kernel-mode APC at the following links:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms681951(v=vs.85).aspx
http://www.drdobbs.com/inside-nts-asynchronous-procedure-call/184416590
http://www.osronline.com/article.cfm?id=75
The Manual-Reset Events
Manual-reset events are not that bad. :-) You can reliably use them when you need to notify multiple threads about a global state change that occurs only once, for example, application exit. Auto-reset events can only be used to notify one thread (because if several threads are waiting simultaneously on an auto-reset event and you set the event, one thread chosen essentially at random will exit its wait and reset the event, while the behavior of the remaining waiting threads is undefined). From the Microsoft documentation, we may assume that one and only one thread will be released while the others definitely will not, but this is not articulated very explicitly in the documentation. In any case, we must take the following quote into consideration: "Do not assume a first-in, first-out (FIFO) order. External events such as kernel-mode APCs can change the wait order." Source: https://msdn.microsoft.com/en-us/library/windows/desktop/ms682655(v=vs.85).aspx
So, when you need to notify all the threads quickly, just set the manual-reset event to the signaled state rather than signaling a separate auto-reset event for each thread, and do not call ResetEvent on it afterwards. The drawback of this solution is that each thread needs an additional event handle in its WaitForMultipleObjects array. The array size is limited to MAXIMUM_WAIT_OBJECTS (64), although we have never come close to that limit in practice.
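A small sketch of that broadcast pattern, with invented names, where one manual-reset event releases every worker:

```c
#include <windows.h>

static HANDLE g_exit_event;   /* manual-reset: stays signaled once set */

static DWORD WINAPI worker(LPVOID arg)
{
    /* Each worker waits on the shared exit event plus its own work event. */
    HANDLE waits[2] = { g_exit_event, (HANDLE)arg };
    for (;;) {
        DWORD r = WaitForMultipleObjects(2, waits, FALSE, INFINITE);
        if (r == WAIT_OBJECT_0)
            return 0;          /* every thread sees the exit signal */
        /* ... handle the per-thread work event ... */
    }
}

int main(void)
{
    /* bManualReset = TRUE: one SetEvent releases ALL current and
     * future waiters; never call ResetEvent on it afterwards. */
    g_exit_event = CreateEvent(NULL, TRUE, FALSE, NULL);

    HANDLE work = CreateEvent(NULL, FALSE, FALSE, NULL);
    HANDLE th = CreateThread(NULL, 0, worker, work, 0, NULL);

    SetEvent(g_exit_event);            /* broadcast shutdown       */
    WaitForSingleObject(th, INFINITE); /* worker sees it and exits */
    return 0;
}
```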
You can get more ideas about auto-reset events and manual reset events from https://www.codeproject.com/Articles/39040/Auto-and-Manual-Reset-Events-Revisited

Safe way to handle closure of sockets managed by epoll

When using epoll_wait to manage multiple connections from multiple threads, there is a risk in releasing the custom data associated with a closed socket.
Consider the following scenario, where T is the custom data:
1. Data is received.
2. Because of 1, thread A unblocks from epoll_wait and processes the event (accessing T).
3. At the same time, another thread B wants to close the connection.
4. Thread B can't assume that T can be safely deleted, even though the call to close will immediately remove the socket from the epoll set.
I had the following standard idea:
Maintain a counter within T that is incremented each time a call to write/read returns EAGAIN, and decremented each time the socket becomes ready.
When close is called, wait for that counter to go down to zero before deleting T.
The issue I ran into is that when close is called, epoll_wait does not return any indication that previous arming of the socket has been cancelled.
Has anybody had this same problem? How did you manage to overcome it?
At least three possible ways here:
1. Do not use threads; it's simple and clean, and it usually works.
2. Have a dedicated thread do all the file descriptor polling and publish events to a pool of worker threads that do the actual I/O and processing.
3. Have one epoll(7) instance per thread, so that threads manage non-intersecting sets of descriptors (with the possible exception of the listening socket(s) used to populate those sets), plus some control mechanism like eventfd(2) or a self-pipe to be able to shut the whole rig down cleanly; a sketch of this option follows below.
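A sketch of that third option under stated assumptions: one private epoll instance per thread, plus a shared eventfd for clean shutdown (all names are illustrative):

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Each thread owns its own epoll instance; only the shutdown eventfd
 * is shared, so regular descriptors never race between threads. */
struct worker_ctx {
    int epfd;
    int shutdown_efd;
};

static void *worker(void *arg)
{
    struct worker_ctx *ctx = arg;
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = ctx->shutdown_efd };
    epoll_ctl(ctx->epfd, EPOLL_CTL_ADD, ctx->shutdown_efd, &ev);

    for (;;) {
        struct epoll_event out;
        if (epoll_wait(ctx->epfd, &out, 1, -1) < 1)
            continue;
        if (out.data.fd == ctx->shutdown_efd)
            return NULL;               /* clean shutdown requested */
        /* ... out.data.fd is owned exclusively by this thread, so it
         *     can be read and closed without racing anyone ... */
    }
}

int main(void)
{
    int shutdown_efd = eventfd(0, 0);
    struct worker_ctx ctx[2];
    pthread_t tid[2];

    for (int i = 0; i < 2; i++) {
        ctx[i].epfd = epoll_create1(0);    /* private epoll per thread */
        ctx[i].shutdown_efd = shutdown_efd;
        pthread_create(&tid[i], NULL, worker, &ctx[i]);
    }

    /* One write wakes every thread: the eventfd is registered
     * level-triggered in each private epoll instance. */
    uint64_t one = 1;
    write(shutdown_efd, &one, sizeof one);

    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```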
Hope this helps.
After much research, I found this recent and remarkable article:
http://lwn.net/Articles/520012/
Basically it acknowledges the issue I am describing and discusses a possible future patch to the Linux kernel that would extend the epoll API in a way that solves the issue.
The extension brings a new command called EPOLL_CTL_DISABLE.
When it is issued, its return value tells the calling thread whether some other thread has just been unblocked from epoll_wait on the same socket.
This can help determine the safe moment for closing the socket and releasing the custom data.

epoll performance

Can anyone please help me answer these questions about epoll_wait?
1. Is it overkill to use many threads calling epoll_wait on the same fd set to serve about 100K active sockets, or is it enough to create just one thread to perform epoll_wait?
2. How many threads will wake up from epoll_wait when, for example, only one socket is ready to read? I mean, can there be a situation where two or more threads wake up from epoll_wait with the same fds in the resulting events?
3. What is the best way to organize threads in a server that works with many active clients (e.g. 50K+)? The best way, I think, is one I/O worker thread which performs epoll_wait and I/O operations, plus many data-processing threads which process the data received from the I/O worker thread (this can take a long time, such as any game logic) and compose new data for the I/O worker thread to send to clients. Am I right in this approach, or can anyone help me find a better way to organize this?
Thanks in advance, Valentin
1. When using epoll, you want to size your thread total to the number of physical CPU cores (or hyperthread dispatch units) you want to use for processing. Using only one thread for work means that at most one core will be active at a time.
2. It depends on the mode of the registration. Events can be "edge triggered", meaning a readiness change is reported exactly once, atomically, to a single waiter, or "level triggered", meaning every epoll_wait call keeps reporting the fd as long as the readiness condition (e.g. unread data in the buffer) persists, so multiple threads can be woken for the same fd.
3. Not enough information to say. I'd suggest not having special-purpose threads at all, for simplicity, and simply handling each event's "command" in the thread in which it is received. But obviously that depends on the nature of your application.
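Regarding point 2, the mode is chosen per descriptor at registration time; a minimal sketch of the two registrations:

```c
#include <sys/epoll.h>

/* Level-triggered (the default): every epoll_wait() reports the fd
 * again while the readiness condition holds, so several threads may
 * see the same fd. */
void add_level_triggered(int epfd, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* Edge-triggered: the readiness change is reported once, to one
 * waiter, so the handler must drain the fd until EAGAIN. */
void add_edge_triggered(int epfd, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}
```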
I recommend you this reading from 2006: http://www.kegel.com/c10k.html
Actually, this is a wrong use case for epoll.
You must absolutely not share the epoll fd between threads. Otherwise you risk one thread reading part of the incoming data on an fd and another thread reading another part of the same fd, with no way to know which part of the data came first.
Just call epoll_create in each and every thread that calls epoll_wait. Otherwise the I/O is broken.
