What is going to happen if my pending signals limit is exceeded? - linux

I'm blocking certain signals in my process, and I'm wondering what is going to happen if the pending signals limit is exceeded.
Do the new ones get lost or is my process going to crash?

Related

POSIX message queue - mq_send thread wake order

Can someone explain to me how message queues handle waking multiple
threads blocked on a single message queue?
My situation is I have multiple writers blocking on a full message
queue, each posting messages with priority equal to the thread
priority. I want to make sure they wake and post in priority order,
however my application is behaving as if they are waking in FIFO order
(i.e. the order in which they blocked). Each blocking thread is
scheduled with the SCHED_FIFO policy with a different priority with
system level scope.
I've searched the Internet high and low for something describing how
this should work and all I can find is POSIX man pages describing that
multiple blockers wake in priority order if Priority Scheduling is
supported. Since the kernel scheduler is a priority scheduler I
would think that the threads would wake in priority order and post to
the queue, however that doesn't appear to be the case. I'm sure I'm
just missing some subtle detail and was hoping the experts here on
this list can help shine some light on what I'm seeing, since it's at
the kernel level that these threads are made ready to run.
I have a small test application that I can post here if necessary. It simply fills a queue, then has a few threads all try to write to it, all with different thread priorities and each posting with a message priority equal to its thread priority. I then remove a message from the queue and would expect the highest-priority thread to wake up and post its message. However, the first thread to wait posts its message first.
Any help or documentation anyone can point me to in order to get to the bottom of this?
Thanks in advance!
Turns out the Linux kernel looks at a task's priority value when the queue is full and adds it to a wait queue in task-niceness order (which is priority order for non-RT tasks). The wait queue does not honor the real-time priorities my application uses; non-RT priorities (nice values) are handled properly, and those tasks wake in niceness order.
The root cause of my issue lay in how the kernel handles priorities when adding tasks to its internal wait queues. I submitted a patch to the linux-kernel list, which was accepted and will be rolled into future releases. It changes the priority check when adding tasks to the wait queue so that it honors both non-RT and RT priorities. It does NOT handle the priorities of deadline-scheduled tasks.

vxworks message queue "lost" a task blocked on it. What can be a reason?

We have a fairly large multitasking communication system implemented on VxWorks 5.5 and a PPC8260. The system handles a lot of Ethernet traffic and also performs some cyclic peripheral control activities via RS-232, memory-mapped I/O, etc. At some point, a few of the message queues we use for inter-task communication become overflowed (I can see this by inspecting the logs). When I check the status of the tasks responsible for serving these message queues (that is, doing a receive on them), they appear to be READY. When I run msgQShow on the queues themselves, they are full, but no tasks appear to be blocked on them. Yet a task's stack trace shows it actually pending inside a msgQReceive call, specifically in the qJobGet kernel call or something similar.
It is unlikely in the extreme that a message queue "lost" a task that was blocked on it.
From your description we can assume:
A message queue has overflowed. Presumably you have detected this by checking the return value from msgQSend, which has been invoked either with a timeout value or with NO_WAIT.
msgQShow confirms the queue is full.
Tasks that should be reading the queue are in the READY state.
The READY state is the state tasks are in when they are available to run. Tasks are held in a queue (strictly, one queue per priority level), and when they reach the head of the queue they are scheduled.
If tasks persistently show as READY, that suggests they are not getting CPU time. The fact that the msgQ does not appear to empty supports that.
You should use a tool such as System Viewer to diagnose this. You may need to raise the priority of the reader tasks, and if your msgQSend calls use NO_WAIT, you may need to use a timeout value instead.

Can a thread hang in ::poll?

We are seeing a large number of ptypes POSIX threads hanging in a socket poll even though the call is made with a timeout of 200 ms. Any ideas about what can lead to such a situation?

Process & Threads states after termination / blocked

Say I have a running process with two running threads. What happens to the threads if the process is terminated? Do they get terminated as well?
Also, what happens to the threads if the process 'loses' the CPU (another process gets the CPU's attention) and is therefore in a waiting/suspended state? Do its threads keep running?
Thanks
Yes, the threads get disposed of by the operating system if and when the process terminates and gets cleaned up.
The threads in a process are scheduled according to their priority. In most OSes that is fairly straightforward: it's simply a matter of the higher-priority threads winning. Windows does something different: a process's priority is adjusted by Windows according to whether it is the foreground process, and thus the priorities of the process's threads are adjusted as well...

Loss of signals

I have an issue with signals being lost. I have a system where signals are generated by child processes and received by other child processes of the same parent. I have used sigprocmask and sigwait to block and then wait for signals inside the receiving child processes, rather than registering an asynchronous handler.
Now, when I run this system, I can see that initially the signals generated by the child processes are blocked by the receiving child processes, which then use sigwait to process these pending signals. So the signals become pending, are fetched using sigwait, and it goes on.
But as time passes, I can see that signals are not consumed as much as before. I mean, lots of signals are generated and they are not being processed by the receiving processes. Is it possible that having lots of signals pending could result in signals being lost?
Only real-time signals (those between SIGRTMIN and SIGRTMAX), if your OS supports them, are guaranteed to be queued (up to a maximum of SIGQUEUE_MAX queued signals). Other signals may be lost if the receiving process already has a pending signal with the same code.
From the specification for sigaction:
If a subsequent occurrence of a pending signal is generated, it is implementation-dependent as to whether the signal is delivered or accepted more than once in circumstances other than those in which queueing is required under the Realtime Signals Extension option.
Standard (non-real-time) pending signals are not queued, which can result in them being lost if you don't explicitly check for them or handle them. That's why you should check that all children have terminated.
Sources: Link to some old lecture material.