Unclear about process state of a multi-threaded process - multithreading

In a multi-threaded system, is it possible to have one thread of a process wait on I/O while another thread of the same process does some other work related to the process?
If it is possible, what can be said about the state of the process? Is it in the running state or in the blocked state?

What can be said?
The "wait for the next message" thread of the process is waiting.
The "process the last message" thread of the process is processing.
i.e. you describe the state of the process in terms of the states of its threads.
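To make that concrete, here is a minimal sketch in plain POSIX C (the thread names are illustrative, not from the question): while io_thread sits blocked in read(), worker_thread keeps running, so the same process has one waiting thread and one running thread at the same moment.

    /* One thread blocks on I/O while the other keeps working. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *io_thread(void *arg)
    {
        char buf[64];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);  /* blocks here */
        printf("io_thread read %zd bytes\n", n);
        return NULL;
    }

    static void *worker_thread(void *arg)
    {
        for (int i = 0; i < 5; i++) {                     /* keeps running */
            printf("worker_thread still running (%d)\n", i);
            sleep(1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t io, worker;
        pthread_create(&io, NULL, io_thread, NULL);
        pthread_create(&worker, NULL, worker_thread, NULL);
        pthread_join(worker, NULL);
        pthread_join(io, NULL);
        return 0;
    }

Build with cc -pthread and type nothing: a tool such as ps -L will show one thread sleeping and one runnable for the same PID.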

Related

Waiting std::condition_variable while forking and forked child process is unable to resume it

I am trying to understand forking with multithreading. So what happens in the scenario below?
The application thread has spawned a thread - the polling thread
The application thread calls fork()
The pthread_atfork handler's pre_fork stops the polling thread using a std::condition_variable. It also waits on a different condition variable to resume the polling
The pthread_atfork handler's post_fork in the child does cv.notify_one for the waiting poll thread and stops the poll thread
The pthread_atfork handler's post_fork in the parent does cv.notify_one for the waiting poll thread and resumes the poll thread
But what happens is, post_fork in the child enters an infinite loop where it keeps on waiting. It also doesn't seem to have notified the poll thread's cv at all.
Why is this happening?
I am trying to understand forking with multithreading.
The #1 thing to understand about combining forking with multithreading is don't do it. The combination is highly problematic other than in a handful of special cases.
So what happens in the scenario below?
The application thread has spawned a thread - the polling thread
The application thread calls fork()
The pthread_atfork handler's pre_fork stops the polling thread using a std::condition_variable. It also waits on a different condition variable to resume the polling
That makes no sense. A condition variable does not have the power to preemptively make any thread stop. And if the polling thread eventually did stop by blocking on the CV, then what role would a different CV have to play in starting it again?
The pthread_atfork handler's post_fork in the child does cv.notify_one for the waiting poll thread and stops the poll thread
I suppose you meant to say that a post_fork handler registered via pthread_atfork performs a cv.notify_one in the child to resume the poll thread.
Either way, it is impossible for the child to do anything with the polling thread because it doesn't have one. The child process has only one thread -- a copy of the one that called fork(). This is one of the main reasons why forking and multithreading don't mix.
The pthread_atfork handler's post_fork in the parent does cv.notify_one for the waiting poll thread and resumes the poll thread
This seems questionable in light of the overall questionable behavior you are attributing to the CVs, but there is nothing inherently wrong with the concept.
But what happens is, post_fork in the child enters an infinite loop where it keeps on waiting.
Something is missing here. Are you doing a timed wait? Is its wait failing? These are the only ways I can think of that the child could be both looping and waiting. There is initially no other thread in the child process to wake the single one that resulted from the fork, so that thread cannot perform a successful return from a wait on any CV, except spuriously. There is no one to signal it.
It also doesn't seem to have notified the poll thread's cv at all.
Do you mean the one in the child that doesn't exist? Or the one in the parent that probably isn't waiting on the CV you think it's waiting on?
Most of the above is moot. There is absolutely no reason to think that your program is exercising one of the special exceptions, so refer to #1: don't combine forking with multithreading. Choose one.
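For completeness, here is roughly what a pause/resume handshake around fork() conventionally looks like when registered with pthread_atfork(). This is a hedged sketch, not the asker's code: every name is invented, it assumes the polling thread reaches poller_checkpoint() regularly instead of sitting in a long blocking call, and even then the advice above still applies -- prefer not to mix fork() with threads at all.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  g_cond = PTHREAD_COND_INITIALIZER;
    static bool g_pause_requested = false;   /* set by the prepare handler   */
    static bool g_poller_paused   = false;   /* acknowledged by the poller   */

    static void prepare(void)                /* parent, just before fork()   */
    {
        pthread_mutex_lock(&g_lock);
        g_pause_requested = true;
        while (!g_poller_paused)             /* wait until the poller parks  */
            pthread_cond_wait(&g_cond, &g_lock);
        /* g_lock stays held across fork() and is released in the handlers. */
    }

    static void parent_after(void)           /* parent, just after fork()    */
    {
        g_pause_requested = false;
        pthread_cond_broadcast(&g_cond);     /* resume the poller            */
        pthread_mutex_unlock(&g_lock);
    }

    static void child_after(void)            /* child, just after fork()     */
    {
        /* The child has only the thread that called fork(): there is no
           poller here to notify. Reset the flags (and respawn the poller if
           needed); a robust version would also re-initialize g_cond before
           the child ever uses it again. */
        g_pause_requested = false;
        g_poller_paused   = false;
        pthread_mutex_unlock(&g_lock);
    }

    /* Called periodically from the polling thread's loop. */
    static void poller_checkpoint(void)
    {
        pthread_mutex_lock(&g_lock);
        if (g_pause_requested) {
            g_poller_paused = true;
            pthread_cond_broadcast(&g_cond); /* tell prepare() we are parked */
            while (g_pause_requested)
                pthread_cond_wait(&g_cond, &g_lock);
            g_poller_paused = false;
        }
        pthread_mutex_unlock(&g_lock);
    }

    /* Registered once, early in main():
       pthread_atfork(prepare, parent_after, child_after); */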

How is a process woken up if it sleeps in an interruptible state?

The kernel code can explicitly put a process to sleep if it is waiting for some event to occur. Now, if the task is put in the TASK_INTERRUPTIBLE state, it can be woken either by an explicit wake-up call or by receiving a signal.
Let's say another process issues a signal to a process that is on a wait queue in the TASK_INTERRUPTIBLE state: the signal will put the process into TASK_RUNNING and will be handled when the process is scheduled next. Is this correct?
An explicit wake-up call by another process can also be used to wake up the sleeping processes. I am wondering how another process could know that the condition the sleeping process is waiting for has become true. Suppose a disk I/O is to be completed and so the process is put to sleep. How could another process know that the I/O is completed? Or is it done by kernel threads?
What am I missing?
It is up to the code that entered the interruptible state to detect the interruption and take appropriate action when it wakes. That might involve the code that is currently handling the user operation completing it with a -ERESTARTSYS error that will be intercepted and dealt with before the system call returns to user mode.
The code that has completed some I/O can just issue a "wake up" to the queue it is responsible for without caring whether there is any task on the queue to be woken up, or the exact condition the task is waiting for.
The task that is woken up needs to decide what to do, and that could include repeating the wait if the condition it is waiting for has not been satisfied.
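In kernel terms the pattern looks roughly like this (a sketch against the standard Linux wait-queue API, not a complete module): the waiter sleeps interruptibly and handles -ERESTARTSYS itself, while the completion path just wakes the queue without knowing who, if anyone, is on it.

    #include <linux/wait.h>
    #include <linux/sched.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_wq);
    static int io_done;                      /* the condition being waited on */

    /* Called from the code path handling the user's request. */
    static long wait_for_io(void)
    {
        /* Returns 0 when io_done became true, or -ERESTARTSYS if a signal
           woke the task first; the caller propagates that so the syscall
           can be restarted or fail with EINTR. */
        if (wait_event_interruptible(my_wq, io_done))
            return -ERESTARTSYS;
        return 0;
    }

    /* Called from the I/O completion path (e.g. an interrupt handler's
       bottom half). It neither knows nor cares who is waiting. */
    static void io_complete(void)
    {
        io_done = 1;
        wake_up_interruptible(&my_wq);
    }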

What is the difference between thread states and process states?

What I learned is that if a process gets blocked, it will be swapped out to disk and wait for a wake-up event. But if a process can have multiple threads, what if a thread is blocked? For example, if one of the threads waits for a keyboard event, that thread will be blocked. Then will the process also be blocked, or is it possible that only the thread is blocked while the process keeps running?
What I learned is that if a process gets blocked, it will be swapped out to disk and wait for a wake-up event.
You're probably reading some very old documentation. Likely by "process" it means something scheduled by the kernel.
But if a process can have multiple threads, what if a thread is blocked? For example, if one of the threads waits for a keyboard event, that thread will be blocked. Then will the process also be blocked, or is it possible that only the thread is blocked while the process keeps running?
If you define a "process" as a container that consists of an address space, file descriptor set and so on and that can contain more than one thread, then there is no such thing as a process being blocked. What would block a process exactly?

Do I need to check for my threads exiting?

I have an embedded application, running as a single process on Linux.
I use sigaction() to catch problems, such as segmentation fault, etc.
The process has a few threads, all of which, like the app, should run forever.
My question is whether (and how) I should detect if one of the threads dies.
Would a seg fault in a thread be caught by the application’s sigaction() handler?
I was thinking of using pthread_cleanup_push/pop, but this page says “If any thread within a process calls exit, _Exit, or _exit, then the entire process terminates”, so I wonder if a thread dying would be caught at the process level …
You are not required to check whether a child thread has completed.
If you need to do something after a child thread completes its processing, you can call pthread_join() from the main thread, so that it waits until the child thread completes execution and you can do the rest after that. If the main thread calls pthread_exit(), it terminates once it is done, leaving the spawned threads to continue executing. The process will be killed only after all of the threads complete execution.
If you want to check the status of a spawned thread, you can use a flag to detect whether it is still running. Check this link for more details:
How do you query a pthread to see if it is still running?
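Putting both points together in a small sketch (all names invented): a handler installed with sigaction() is process-wide, so a SIGSEGV raised in any thread runs that same handler (in the faulting thread), and pthread_join() is the simplest way to notice that a worker that "should run forever" has in fact exited.

    #include <pthread.h>
    #include <signal.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_bool worker_alive = true;

    static void crash_handler(int sig)
    {
        /* Runs in whichever thread faulted; only async-signal-safe calls here. */
        const char msg[] = "fatal signal caught\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    static void *worker(void *arg)
    {
        /* ... real work; should run forever ... */
        sleep(3);                          /* simulate the thread dying */
        atomic_store(&worker_alive, false);
        return NULL;
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = crash_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);     /* one handler for every thread */

        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        pthread_join(t, NULL);             /* blocks until the worker exits */
        fprintf(stderr, "worker exited (alive flag: %d)\n",
                atomic_load(&worker_alive));
        return 0;
    }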

Threads: some questions

I have a couple of questions on threads. Could you please clarify?
Suppose a process has one or multiple threads. If the process is preempted/suspended, do the threads also get preempted or do the threads continue to run?
When the suspended process is rescheduled, do the process's threads also get scheduled? If the process has multiple threads, which threads will be rescheduled and on what basis?
If a thread in the process is running and receives a signal (say Ctrl-C) whose default action is to terminate a process, does only the running thread terminate, or does the parent process also terminate? What happens to the threads if the running process terminates because of some signal?
If a thread does a fork followed by an exec, does the exec'd program overlay the address space of the parent process or of the running thread? If it overlays the parent process, what happens to the threads, their data, and the locks they are holding, and how do they get scheduled once the exec'd process terminates?
Suppose a process has multiple threads; how do the threads get scheduled? If one of the threads blocks on some I/O, how do the other threads get scheduled? Are the threads scheduled only while the parent process is running?
While a thread is running, what does the kernel's current variable point to (the parent process's task_struct or the thread's task_struct)?
If the process with the thread is running, when the thread starts does the parent process get preempted and how does each thread get scheduled?
If a process running on a CPU creates multiple threads, can the threads created by the parent process be scheduled on another CPU on a multiprocessor system?
Thanks,
Ganesh
First, I should clear up some terminology that you appear to be confused about. In POSIX, a "process" is a single address space plus at least one thread of control, identified by a process ID (PID). A thread is an individually-scheduled execution context within a process.
All processes start life with just one thread, and all processes have at least one thread. Now, onto the questions:
Suppose a process has one or multiple threads. If the process is preempted/suspended, do the threads also get preempted or do the threads continue to run?
Threads are scheduled independently. If a thread blocks on a function like connect(), then other threads within the process can still be scheduled.
It is also possible to request that every thread in a process be suspended, for example by sending SIGSTOP to the process.
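As a small illustration of that last point (target_pid is a placeholder, not something from the question), stopping and later continuing a whole multi-threaded process from outside looks like this:

    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Stops every thread of the target process, then lets them run again. */
    static void suspend_and_resume(pid_t target_pid)
    {
        kill(target_pid, SIGSTOP);   /* the kernel stops all of its threads */
        sleep(5);
        kill(target_pid, SIGCONT);   /* all runnable threads may run again  */
    }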
When the suspended process is rescheduled, do the process's threads also get scheduled? If the process has multiple threads, which threads will be rescheduled and on what basis?
This only makes sense in the context that an explicit request was made to stop the entire process. If you send the process SIGCONT to restart the process, then any of the threads which are not blocked can run. If more threads are runnable than there are processors available to run them, then it is unspecified which one(s) run first.
If a thread in the process is running and receives a signal (say Ctrl-C) whose default action is to terminate a process, does only the running thread terminate, or does the parent process also terminate? What happens to the threads if the running process terminates because of some signal?
If a thread receives a signal like SIGINT or SIGSEGV whose action is to terminate the process, then the entire process is terminated. This means that every thread in the process is unceremoniously killed.
If a thread does a fork followed by an exec, does the exec'd program overlay the address space of the parent process or of the running thread? If it overlays the parent process, what happens to the threads, their data, and the locks they are holding, and how do they get scheduled once the exec'd process terminates?
The fork() call creates a new process by duplicating the address space of the original process, and duplicating just the single thread that called fork() within that new address space.
If that thread in the new process calls execve(), it will replace the new, duplicated address space with the exec'd program. The original process, and all its threads, continue running normally.
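A sketch of that fork-then-exec sequence (the command and names are illustrative): the child starts with only a copy of the forking thread, execlp() replaces the child's address space with the new program, and the parent with all of its threads carries on untouched.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void *other_thread(void *arg)
    {
        for (int i = 0; i < 5; i++) {      /* keeps running in the parent */
            printf("parent's other thread: %d\n", i);
            sleep(1);
        }
        return NULL;
    }

    static void *forking_thread(void *arg)
    {
        pid_t pid = fork();                /* child gets ONLY this thread  */
        if (pid == 0) {
            execlp("echo", "echo", "hello from the exec'd child", (char *)NULL);
            _exit(127);                    /* only reached if exec failed  */
        }
        waitpid(pid, NULL, 0);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, other_thread, NULL);
        pthread_create(&b, NULL, forking_thread, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }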
Suppose a process has multiple threads; how do the threads get scheduled? If one of the threads blocks on some I/O, how do the other threads get scheduled? Are the threads scheduled only while the parent process is running?
The threads are scheduled independently. Any of the threads that are not blocked can run.
While a thread is running, what does the kernel's current variable point to (the parent process's task_struct or the thread's task_struct)?
Each thread has its own task_struct within the kernel. What userspace calls a "thread" is called a "process" in kernel space. Thus current always points at the task_struct corresponding to the currently executing thread (in the userspace sense of the word).
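You can observe this from user space (Linux-specific sketch): each thread has its own kernel task and therefore its own TID, while getpid() returns the shared thread-group ID. The gettid() wrapper only appeared in newer glibc, so syscall(SYS_gettid) is used here instead.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *show_ids(void *arg)
    {
        printf("pid=%ld tid=%ld\n",
               (long)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, show_ids, NULL);
        pthread_create(&t2, NULL, show_ids, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        show_ids(NULL);                    /* main thread: tid == pid */
        return 0;
    }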
If the process with [a second] thread is running, when the thread starts does the parent process get preempted and how does each thread get scheduled?
Presumably you mean "the process's main thread" rather than "parent process" here. As before, the threads are scheduled independently. It's unspecified whether one runs before the other - and if you have multiple CPUs, both might run simultaneously.
If a process running on a CPU creates multiple threads, can the threads created by the parent process be scheduled on another CPU on a multiprocessor system?
That's really up to the kernel, but the threads are certainly allowed to execute on other CPUs.
Depends. If a thread is preempted because the OS scheduler decides to give CPU time to some other thread, then the other threads in the process will continue running. If the process is suspended (i.e. it gets the SIGSTOP signal) then AFAIK all the threads will be suspended.
When a suspended process is woken up, all the threads are marked as waiting or blocked (if they are waiting, e.g. on a mutex). Then the scheduler at some point runs them. There is no guarantee about any specific order the threads are run in after waking up the process.
The process will terminate, and with it the threads as well.
When you fork you get a new address space, so there is no "overlay". Note that fork() and the exec() family affect the entire process, not only the thread from which they were called. When you call fork() in a multi-threaded process, the child gets a copy of that process, but with only the calling thread. Then if you call exec() in one or both of the processes (presumably only in the child process, but that's up to you), the process which calls exec() (and with it, all its threads) is replaced by the exec()'ed program.
The thread scheduling order is decided by the OS scheduler, there is no guarantee given about any particular order.
From the kernel perspective a process is an address space with one or more threads (and some other gunk). There is no concept of threads that somehow exist without a process.
There is no such thing as a process without a single thread. A "plain process" is just a process with a single thread.
Probably yes. This is determined by the OS scheduler. Note that there are APIs and tools (numactl) that one can use to force some thread(s) to run on a specific CPU core.
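One such API, as a hedged sketch (a GNU/Linux extension; core 0 is chosen arbitrarily): pthread_setaffinity_np() restricts a thread to a chosen set of cores, and sched_getcpu() reports where it is actually running.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                  /* allow CPU core 0 only          */

        /* Pin the calling thread; any other threads remain free to migrate. */
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);
        printf("now running on CPU %d\n", sched_getcpu());
        return 0;
    }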
Assuming your questions are about POSIX threads, then
1a. A process that's preempted by the O/S will have all its threads preempted.
1b. The O/S will suspend all the threads of a process that is sent a SIGSTOP.
The O/S will resume all threads of a suspended process that is sent a SIGCONT.
By default, a SIGINT will terminate all the threads in a process.
If a thread calls fork(), only the calling thread is duplicated in the child process. If the child then calls one of the exec() functions, that remaining thread is replaced along with the rest of the child's image by the new program.
POSIX allows for user-selection of the thread scheduling algorithm.
I don't understand the question.
I don't understand the question.
How threads are mapped to CPUs is implementation-dependent. Many implementations will try to distribute threads amongst the available CPUs to improve performance.
The Linux kernel doesn't distinguish between threads and processes. As far as the kernel is concerned, a thread is simply another process which happens to share an address space with other processes. (You would call the set of "processes" (i.e. threads) which share a single address space a "process".)
So POSIX threads are scheduled exactly as full-blown processes would be. There is no difference in scheduling whether you have one process with five threads, or five separate processes.
There are kernel calls that provide fine-grained control over what is shared between processes. The POSIX threads API wraps over them.
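Those calls are clone() and friends. A Linux-only sketch of the idea (the flag set and the 64 KiB stack size are just illustrative choices): the more CLONE_* sharing flags you pass, the more "thread-like" the new task is; pass none of them and you get something much closer to fork().

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int shared_counter = 0;

    static int child_fn(void *arg)
    {
        shared_counter = 42;               /* visible to the parent: CLONE_VM */
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 64 * 1024;
        char *stack = malloc(stack_size);

        /* The stack grows downward on most architectures, so pass its top.
           Sharing the address space, filesystem info, file descriptors and
           signal handlers is roughly what a "thread" is made of. */
        int child = clone(child_fn, stack + stack_size,
                          CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND
                          | SIGCHLD,
                          NULL);

        waitpid(child, NULL, 0);
        printf("shared_counter = %d\n", shared_counter);   /* prints 42 */
        free(stack);
        return 0;
    }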

Resources