Synchronising between threads handling read and write on sockets with infinite loops

I am working on a task where one thread (thread 1) receives user input and another thread (thread 2) sends cyclic CAN messages, in a Linux environment. The flow of the program is as follows:
1) In the main thread, a socket is created using socket(AF_INET, SOCK_STREAM, 0) for a server-client TCP connection.
2) bind() completes successfully.
3) Using accept(), the server-client connection is established.
4) Upon 3), two threads are created, thread1 and thread2.
5) thread1 waits for user input via recv(): http://man7.org/linux/man-pages/man2/recv.2.html
6) thread2 creates a socket for CAN communication and uses a timer to send the CAN messages at a specified interval, continuously.
In thread2, I have used timer_create (http://man7.org/linux/man-pages/man2/timer_create.2.html) to send the cyclic messages; the timer routine uses write() to put the CAN messages on the CAN socket.
The problem is that recv() in thread1 runs in an infinite loop waiting for user input, while the timer in thread2 also fires continuously. This makes recv() in thread1 exit with EINTR when the timer signal interrupts it. I have tried the following:
Created a condition wait using pthread_cond_wait in thread 1, to wait for a status update from thread 2. This does not work: once the timer has started, the condition variable is never checked again.
Does anyone have experience with using sockets in two infinite loops, one receiving and one sending in separate threads, and controlling them with a condition variable? Any suggestions on how to handle this?
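One common way around the EINTR problem described here, sketched below purely as an illustration: it assumes the cyclic timer in thread2 was created with SIGEV_SIGNAL and SIGRTMIN, and leaves that signal unblocked in thread2 (TIMER_SIG, thread1_recv_loop and the buffer size are made-up names, not from the original code). The receiving thread blocks the timer's signal with pthread_sigmask, so the process-directed signal gets delivered to thread2 instead, and it additionally retries recv() if some other signal interrupts it.

    /* Minimal sketch, not the original program. Assumption: thread2's timer was
     * created with SIGEV_SIGNAL using TIMER_SIG and thread2 keeps it unblocked. */
    #include <errno.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #define TIMER_SIG SIGRTMIN   /* assumption: signal used by timer_create in thread2 */

    void *thread1_recv_loop(void *arg)
    {
        int client_fd = *(int *)arg;
        char buf[256];

        /* Keep the timer signal away from this thread so recv() is not interrupted. */
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, TIMER_SIG);
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        for (;;) {
            ssize_t n = recv(client_fd, buf, sizeof buf, 0);
            if (n < 0 && errno == EINTR)
                continue;             /* interrupted by some other signal: just retry */
            if (n <= 0)
                break;                /* error or peer closed the connection */
            printf("received %zd bytes of user input\n", n);
        }
        return NULL;
    }

An alternative that avoids signals entirely is to create the timer with SIGEV_THREAD instead of SIGEV_SIGNAL; the timer callback then runs in its own thread and recv() never sees EINTR from the timer in the first place.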

Related

Can epoll's EPOLLEXCLUSIVE trigger multiple reads on the same socket simultaneously?

My understanding is that EPOLLEXCLUSIVE will only wake one thread per event, but if more events happen it will retrigger.
Suppose I'm reading from multiple sockets using multiple threads, all using epoll_wait with EPOLLEXCLUSIVE.
1. Data comes in on socket S ==> thread A wakes.
2. More data comes in on socket S ==> thread B wakes.
If #2 happens before A has finished, could A and B end up reading from the socket simultaneously?
Is my understanding correct?
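For reference, a minimal sketch of the setup being asked about, with made-up names (worker, sock_fd): every worker thread creates its own epoll instance and adds the same socket fd with EPOLLEXCLUSIVE (available since Linux 4.5). Nothing in such code prevents two woken threads from reading the same socket at the same time, which is exactly the scenario in the question.

    /* Illustrative sketch only: each thread runs worker() with a pointer to the
     * shared socket fd and registers it in its own epoll instance. */
    #include <pthread.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        int sock_fd = *(int *)arg;
        int ep = epoll_create1(0);

        struct epoll_event ev = { .events = EPOLLIN | EPOLLEXCLUSIVE, .data.fd = sock_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev);

        struct epoll_event out;
        for (;;) {
            int n = epoll_wait(ep, &out, 1, -1);
            if (n == 1) {
                char buf[4096];
                /* Nothing here stops another woken thread from also reading
                 * sock_fd at the same time, as the question asks about. */
                read(sock_fd, buf, sizeof buf);
            }
        }
        return NULL;
    }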

select(), posix message queue and multithreading in Linux

I am facing a problem with a message queue:
I have used mq_timedreceive() to receive from the message queue within an abs_timeout time.
But this function is affected by the system time (CLOCK_REALTIME). I mean that when the system time changes, the abs_timeout (an absolute time) is no longer correct.
To fix this problem, I realise it should use the CLOCK_MONOTONIC clock instead.
But in Linux there is no way to do this (I searched and found that QNX supports this mechanism).
Finally, I combined select() and mq_timedreceive() with no wait:
+ select(): uses a relative time, so it's not affected by system time changes.
After the timeout, I fetch the message with mq_timedreceive(), of course with an absolute timeout of 0.
But my problem is:
If the system has many threads waiting on the same message queue (using select()), and a message is sent to the queue, all waiting threads are woken up and start running. That is wrong:
a thread other than the first waiter may wake up first and take the message.
What I expect is that only the first waiting thread is woken up and gets the message, while the others stay blocked.
Please help.
Looks like you have several questions in one:
Waiting on a message queue with a timeout that is not affected by clock adjustments. In Linux, the following APIs let you select the clock (CLOCK_REALTIME, CLOCK_MONOTONIC, etc.): timerfd_create and timer_create. One way to integrate these with mq_timedreceive is to let timer_create fire a signal that interrupts mq_timedreceive (see the sketch after this list).
Integrating waiting on a POSIX message queue with select. The most straightforward way would be to use mq_notify to deliver a signal when a new message is available, thus making the select call return -1 with errno set to EINTR.
Fair queuing, so that the first waiter gets the next message. With POSIX message queues this may be possible if the waiting threads are blocked in mq_receive; otherwise the next available message goes to whichever thread calls mq_receive first.
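A minimal sketch of point 1, using made-up names (the queue "/demo_q", timeout_handler) and default queue attributes; link with -lrt where your glibc still requires it. A one-shot CLOCK_MONOTONIC timer delivers SIGRTMIN, whose handler is installed without SA_RESTART, so a blocked mq_timedreceive() returns -1 with errno set to EINTR when the timer fires, regardless of any wall-clock adjustments:

    #include <errno.h>
    #include <fcntl.h>
    #include <mqueue.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    static void timeout_handler(int sig) { (void)sig; /* only purpose: interrupt the call */ }

    int main(void)
    {
        mqd_t q = mq_open("/demo_q", O_CREAT | O_RDONLY, 0600, NULL);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = timeout_handler;     /* deliberately no SA_RESTART */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, NULL);

        struct sigevent sev;
        memset(&sev, 0, sizeof sev);
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo  = SIGRTMIN;
        timer_t tid;
        timer_create(CLOCK_MONOTONIC, &sev, &tid);      /* immune to clock changes */

        struct itimerspec its = { .it_value = { .tv_sec = 5 } };   /* fire once after 5 s */
        timer_settime(tid, 0, &its, NULL);

        /* Block "forever" in CLOCK_REALTIME terms; the monotonic timer bounds the wait. */
        struct timespec abs_timeout;
        clock_gettime(CLOCK_REALTIME, &abs_timeout);
        abs_timeout.tv_sec += 24 * 60 * 60;

        char buf[8192];                                  /* >= mq_msgsize (8192 by default) */
        ssize_t n = mq_timedreceive(q, buf, sizeof buf, NULL, &abs_timeout);
        if (n == -1 && errno == EINTR)
            fprintf(stderr, "monotonic timeout expired\n");

        mq_unlink("/demo_q");
        return 0;
    }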
For message passing between threads of the same process, another approach is to have a pipe act as a queue of message pointers. That is, a producer thread creates a message and writes a pointer to it into the pipe (there is no need to serialize the entire message, because the recipient is in the same process and shares the address space). Any consumer thread can wait on the pipe using select and then read the pointers to messages. If multiple threads are waiting on the same pipe, they all get woken up, but only one of them will read a given message pointer off the pipe.
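A minimal sketch of that pipe-as-pointer-queue idea, with illustrative names (struct msg, producer, consumer): each write() of a pointer is far below PIPE_BUF and therefore atomic, and the read end is made non-blocking so a consumer that loses the race simply goes back to select():

    #include <errno.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/select.h>
    #include <unistd.h>

    struct msg { int id; char payload[64]; };

    static int msg_pipe[2];   /* [0] = read end shared by consumers, [1] = write end */

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 10; i++) {
            struct msg *m = malloc(sizeof *m);
            m->id = i;
            snprintf(m->payload, sizeof m->payload, "message %d", i);
            write(msg_pipe[1], &m, sizeof m);       /* ship the pointer, not the payload */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(msg_pipe[0], &rfds);
            if (select(msg_pipe[0] + 1, &rfds, NULL, NULL, NULL) <= 0)
                continue;                            /* interrupted or error: wait again */

            struct msg *m;
            ssize_t n = read(msg_pipe[0], &m, sizeof m);
            if (n == (ssize_t)sizeof m) {            /* this thread won the race */
                printf("consumed: %s\n", m->payload);
                free(m);
            }
            /* n == -1 with errno == EAGAIN: another woken consumer already took it */
        }
        return NULL;
    }

    int main(void)
    {
        pipe(msg_pipe);
        fcntl(msg_pipe[0], F_SETFL, O_NONBLOCK);     /* losers of the race must not block */

        pthread_t c1, c2, p;
        pthread_create(&c1, NULL, consumer, NULL);
        pthread_create(&c2, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);

        pthread_join(p, NULL);
        sleep(1);            /* crude: give the consumers a moment to drain before exit */
        return 0;
    }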

Generating EPOLLRDHUP event on tcp socket

How do I programmatically trigger an EPOLLRDHUP event on my TCP socket from the other thread?
I have added the socket to the epoll instance with the EPOLLRDHUP event and tried to generate the event, but epoll_ctl only modifies the events registered for that FD; it does not trigger them.
I want my first thread, which is continuously waiting for events with epoll_wait(), to receive the EPOLLRDHUP event as soon as the other thread triggers it. I cannot work out how to trigger that event. I tried using the write system call from the other thread, but that does not seem to trigger the event on the socket FD either. What I need is for the poll to come out of its blocking loop. Please help, thanks.
You can't generate epoll events on the same file descriptor from another thread; EPOLLRDHUP is generated based on something happening at the other end of the TCP connection.
If you have one thread waiting in epoll_wait() and you want to wake it up from another thread, you should create a pipe() and have your epoll_wait() wait for read events on the reading side of the pipe in addition to any TCP sockets. When you want to wake up your thread, write a byte to the writing side of the pipe.
(An eventfd could be used instead of the pipe to achieve the same.)
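A minimal sketch of that suggestion using an eventfd (a pipe works the same way, with the read end registered in epoll and a write to the write end); the names waiter and wake_fd are illustrative:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    static int wake_fd;

    static void *waiter(void *arg)
    {
        int ep = *(int *)arg;
        struct epoll_event ev;
        for (;;) {
            int n = epoll_wait(ep, &ev, 1, -1);
            if (n == 1 && ev.data.fd == wake_fd) {
                uint64_t v;
                read(wake_fd, &v, sizeof v);     /* drain the eventfd counter */
                printf("woken up by the other thread\n");
                break;
            }
            /* ... handle the TCP socket events here ... */
        }
        return NULL;
    }

    int main(void)
    {
        int ep = epoll_create1(0);
        wake_fd = eventfd(0, 0);

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = wake_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, wake_fd, &ev);

        pthread_t t;
        pthread_create(&t, NULL, waiter, &ep);

        sleep(1);                                 /* let the waiter block in epoll_wait */
        uint64_t one = 1;
        write(wake_fd, &one, sizeof one);         /* the "trigger" from the other thread */

        pthread_join(t, NULL);
        return 0;
    }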

pthread_sigmask not working properly with aio callback threads

My application is sometimes terminating from SIGIO or SIGUSR1 signals even though I have blocked these signals.
My main thread starts off by blocking SIGIO and SIGUSR1, then starts 2 AIO read operations. These operations use threads to get notified about the operation status. The notify functions (invoked as detached threads) start another AIO operation (they manipulate the data that has been read and start writing it back to the file), and that notification is handled by sending a signal (one operation uses SIGIO, the other SIGUSR1) to this process. I receive these signals synchronously by calling sigwait in the main thread. Unfortunately, my program sometimes crashes, terminated by a SIGUSR1 or SIGIO signal (which should be blocked by the signal mask).
One possible solution is to set SIG_IGN handlers for them, but that doesn't solve the problem: their handlers shouldn't be invoked at all; rather, they should be retrieved from the pending signals by sigwait in the next iteration of the main program loop.
I have no idea which thread handles these signals in this manner. Maybe it's init that receives them? Or some shell thread? I have no idea.
I'd hazard a guess that the signal is being received by one of your AIO callback threads, or by the very thread which generates the signal. (Prove me wrong and I'll delete this answer.)
Unfortunately per the standard, "[t]he signal mask of [a SIGEV_THREAD] thread is implementation-defined." For example, on Linux (glibc 2.12), if I block SIGUSR1 in main, then contrive to run a SIGEV_THREAD handler from an aio_read call, the handler runs with SIGUSR1 unblocked.
This makes SIGEV_THREAD handlers unsuitable for an application that must reliably and portably handle signals.
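For what it's worth, a minimal sketch of the experiment described above (the file name and buffer size are arbitrary; link with -lrt/-pthread where required): SIGUSR1 is blocked in the main thread, and the SIGEV_THREAD notify function simply queries its own signal mask to show whether the block was inherited:

    #include <aio.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void notify(union sigval sv)
    {
        (void)sv;
        sigset_t cur;
        pthread_sigmask(SIG_SETMASK, NULL, &cur);        /* query only, do not change */
        printf("SIGUSR1 %s in SIGEV_THREAD handler\n",
               sigismember(&cur, SIGUSR1) ? "blocked" : "NOT blocked");
    }

    int main(void)
    {
        sigset_t blocked;
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &blocked, NULL);       /* blocked in main... */

        static char buf[64];                              /* must outlive the AIO request */
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = open("/etc/hostname", O_RDONLY);  /* arbitrary readable file */
        cb.aio_buf = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_sigevent.sigev_notify = SIGEV_THREAD;
        cb.aio_sigevent.sigev_notify_function = notify;   /* ...but what about here? */

        aio_read(&cb);
        sleep(1);                                         /* crude wait for completion */
        return 0;
    }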

Can I add socket to a epoll descriptor while another thread waits on this epoll descriptor?

I have several threads, one of them calls epoll_wait in a loop, others can open connections that need to be epoll'ed by first thread. Is it possible to just add new sockets with epoll_ctl while another thread waits in epoll_wait?
What will happen in the following scenario:
Thread 1 calls epoll_wait.
Thread 2 creates a socket(A) and adds it to epoll instance using epoll_ctl.
Someone sends some data, and socket A becomes ready for a read() call.
Will epoll_wait return socket A?
Yes, it will. The whole point of an epoll descriptor is that you don't have to duplicate effort. No snapshotting or use of multiple wait queues is involved.
Under the hood, the epoll descriptor has its own wait queue. When you block on it in epoll_wait, you are added to that single wait queue. No state is saved or anything like that; the state lives in the epoll descriptor itself, so a socket added while another thread is blocked in epoll_wait is picked up by that wait.
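A minimal sketch of the scenario in the question, with illustrative names: thread 1 blocks in epoll_wait() on a shared epoll descriptor before socket A even exists; the main thread then plays the role of thread 2, registers one end of a socketpair with epoll_ctl() and writes to the other end, and the event shows up in thread 1's epoll_wait():

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int ep;

    static void *thread1(void *arg)
    {
        (void)arg;
        struct epoll_event ev;
        int n = epoll_wait(ep, &ev, 1, -1);       /* blocks before socket A exists */
        if (n == 1)
            printf("epoll_wait returned fd %d, added after the wait started\n", ev.data.fd);
        return NULL;
    }

    int main(void)
    {
        ep = epoll_create1(0);

        pthread_t t;
        pthread_create(&t, NULL, thread1, NULL);
        sleep(1);                                  /* let thread 1 block in epoll_wait */

        /* "Thread 2" (here: main): create socket A and add it to the epoll instance. */
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sv[0] };
        epoll_ctl(ep, EPOLL_CTL_ADD, sv[0], &ev);

        write(sv[1], "x", 1);                      /* "someone sends some data" */
        pthread_join(t, NULL);
        return 0;
    }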
