I am trying to revise my Operating System concepts, but I have some confusion. I know that a process is a thread with its own address space.
1) Are deadlocks only caused by threads or by processes? (Threads share the process's stack, whereas different processes have different stacks.)
2) Can a single process cause a deadlock, or does it take more than one process for a deadlock to occur?
I am not sure if this is the right place to ask this. If not, please let me know and I will delete the question.
Both threads AND processes can get into deadlocks, depending on what they are trying to lock. If the resource they want to lock is shared within a process (e.g. a critical section), threads can get into a deadlock. On the other hand, if it is a resource that's shared globally (e.g. a named mutex), processes can get into a deadlock. For 2), there must be more than one process involved, since more than one process must be trying to lock a (globally) shared resource for a deadlock to occur.
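To make the within-a-process case concrete, here is a minimal sketch (my own, not part of the original answer) of the classic lock-order deadlock between two threads, assuming POSIX threads; the mutex names and the sleep() calls are only there to make the bad interleaving easy to hit:

    /* Classic lock-order deadlock: t1 locks a then b, t2 locks b then a.
       Compile with: gcc deadlock.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        pthread_mutex_lock(&a);
        sleep(1);                 /* widen the race window */
        pthread_mutex_lock(&b);   /* blocks forever: t2 holds b */
        pthread_mutex_unlock(&b);
        pthread_mutex_unlock(&a);
        return NULL;
    }

    static void *t2(void *arg) {
        pthread_mutex_lock(&b);
        sleep(1);
        pthread_mutex_lock(&a);   /* blocks forever: t1 holds a */
        pthread_mutex_unlock(&a);
        pthread_mutex_unlock(&b);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);    /* never returns once the deadlock hits */
        pthread_join(y, NULL);
        puts("not reached");
        return 0;
    }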
The answer lies in your question itself. Each process has an address space, and all the threads created by the process share it (each thread gets its own stack, though). Whenever two threads of the same process each hold a lock on a resource (data, a communication channel, ...) that the other one needs, and each waits for the other to release it, a deadlock occurs.
Answer:
For 1): threads cause deadlocks within a process, and processes cause deadlocks within whatever contains them (in most situations, the OS).
For 2): yes, a single process can cause a deadlock on its own, for example when one of its threads tries to re-lock a mutex it already holds (see the sketch below).
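As one hypothetical illustration of that, a single thread in a single process can deadlock itself simply by re-locking a mutex it already holds. Strictly speaking, POSIX leaves re-locking the default mutex type undefined, but on Linux the default behaves like PTHREAD_MUTEX_NORMAL and just blocks forever:

    /* Self-deadlock: the second lock on a non-recursive mutex never succeeds.
       Compile with: gcc self_deadlock.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    int main(void) {
        pthread_mutex_lock(&m);
        puts("got the lock once");
        pthread_mutex_lock(&m);   /* blocks here forever */
        puts("not reached");
        return 0;
    }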
It is well-known that the default way to create a new process under POSIX is to use fork() (under Linux this internally maps to clone(...))
What I want to know is the following: It is well-known that when one calls fork() "The child process is created with a single thread--the one that called fork()"
(cf. https://linux.die.net/man/2/fork). This can of course cause problems if, for example, some other thread currently holds a lock. To me, not also forking all the threads that exist in the process intuitively feels like a "leaky abstraction".
So I would like to know: What is the reason why only the thread calling fork() will exist in the child process instead of all threads of the process? Is there a good technical reason for this?
I know that on Multithreaded fork there is a related question, but the answers given there don't answer mine.
Of these two possibilities:
only the thread calling fork() continues running in the child process
Downside: if another thread was holding on to an internal resource such as a lock, it will not be released.
after fork(), all threads are duplicated into the child process
Downside: threads that were interacting with external resources continue running in parallel. If a thread was appending data to a file: now it happens twice.
Both are bad, but the first choice only deadlocks the new child process, while the second choice results in corruption outside of the process. This could be described as "bad".
POSIX did standardize pthread_atfork to try to allow automatic cleanup in the first case, but it cannot possibly work.
tl;dr Don't use both threads and forks. Use posix_spawn if you have to.
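For what it's worth, here is a minimal sketch of the posix_spawn route; the /bin/echo command and its arguments are just placeholders for whatever child program you actually need:

    /* Launch a child program without fork()ing a multithreaded process.
       Compile with: gcc spawn.c */
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "echo", "spawned without fork()", NULL };

        /* NULL file actions and attributes: inherit defaults from the parent. */
        int rc = posix_spawn(&pid, "/bin/echo", NULL, NULL, argv, environ);
        if (rc != 0) {
            fprintf(stderr, "posix_spawn failed: %d\n", rc);
            return 1;
        }
        waitpid(pid, NULL, 0);   /* reap the child as usual */
        return 0;
    }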
I read that one of the differences between a semaphore and a mutex is that, in the case of a mutex, only the process/thread that holds the lock can release it, but in the case of a semaphore, any other process can release it. My doubt arises because a process that does not hold the semaphore can release it. What is the use of having a semaphore then?
Let's say I have two processes A and B. Assume process A holds a semaphore and is executing some critical task. Now let us say process B sends a signal to release the semaphore. In this scenario, will process A release the semaphore even though it is executing some critical task?
You are making half-sense. It is not about ownership. Partner-release in semaphores (and mutexes) is usable, for instance, in my favorite interview question of thread ping-pong. As a matter of fact, I have specifically tried to partner-release a mutex on the 3 implementations available to me at the time (Linux/Solaris/AIX), and partner-release did work for mutexes as expected - i.e. the mutex was successfully released and threads blocking on it resumed execution. However, this is, of course, prohibited by POSIX.
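For reference, here is one possible sketch (mine, not the interviewer's) of that thread ping-pong, written with two POSIX semaphores so that each thread posts the semaphore the other thread waits on, i.e. partner-release:

    /* Thread ping-pong via partner-released semaphores.
       Compile with: gcc pingpong.c -pthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t ping, pong;

    static void *pinger(void *arg) {
        for (int i = 0; i < 3; i++) {
            sem_wait(&ping);
            puts("ping");
            sem_post(&pong);   /* release the semaphore the other thread waits on */
        }
        return NULL;
    }

    static void *ponger(void *arg) {
        for (int i = 0; i < 3; i++) {
            sem_wait(&pong);
            puts("pong");
            sem_post(&ping);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&ping, 0, 1);   /* pinger goes first */
        sem_init(&pong, 0, 0);
        pthread_t a, b;
        pthread_create(&a, NULL, pinger, NULL);
        pthread_create(&b, NULL, ponger, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }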
I think you might be confused about the whole set of differences between a semaphore and a mutex. A mutex provides mutual exclusion. A semaphore counts, and only starts excluding once its count is exhausted. A semaphore with a count of one would give similar semantics to a mutex, though.
A good example would be a television set. Only so many people can watch the same television set, so protecting it with a semaphore would make sense. Anyone can stop watching the television. The remote control for the television can only be operated by one person at a time though, so you could protect it with a mutex.
Some reading...
https://en.wikipedia.org/wiki/Mutual_exclusion
https://en.wikipedia.org/wiki/Semaphore_%28programming%29
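If it helps, here is a small sketch of the television analogy, assuming POSIX threads and an unnamed semaphore; the limit of 3 viewers is arbitrary:

    /* A counting semaphore admits up to 3 viewers; a mutex guards the remote.
       Compile with: gcc tv.c -pthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t tv_seats;
    static pthread_mutex_t remote = PTHREAD_MUTEX_INITIALIZER;

    static void *viewer(void *arg) {
        long id = (long)arg;

        sem_wait(&tv_seats);              /* start watching (blocks if full) */
        printf("viewer %ld is watching\n", id);

        pthread_mutex_lock(&remote);      /* only one person holds the remote */
        printf("viewer %ld changes the channel\n", id);
        pthread_mutex_unlock(&remote);

        sleep(1);
        sem_post(&tv_seats);              /* stop watching */
        return NULL;
    }

    int main(void) {
        sem_init(&tv_seats, 0, 3);        /* at most 3 concurrent viewers */
        pthread_t t[5];
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, viewer, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        return 0;
    }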
"Let's say I have two processes A and B. Assume process A is having a semaphore with it and executing some critical task. Now let us say process B sends a signal to release the semaphore. In this scenario, will process A release the semaphore even if it is executing some critical task?"
One key point to note here is the role of the OS kernel. Process B can't send a signal to process A "to release the semaphore". What it can do is ask the kernel to give it access to the resource. Process A had earlier asked the kernel, and the kernel granted it access to the resource.
Now process A, after it finishes its job, will let the kernel know that it is done with the resource, and the kernel then grants access to B.
"My doubt arises when a process that does not have the semaphore with it can release the semaphore. What is the use of having a semaphore?"
The key difference between a mutex and a semaphore is that a semaphore controls access to multiple instances of a resource, while a mutex does the same when there is a single instance of the resource.
A count is maintained by the kernel in the case of a semaphore; a mutex is the special case where the count is 1.
Consider the processes as customers waiting in line at a bank.
The use of a semaphore is analogous to the case where there are multiple tellers serving the customers. The use of a mutex is analogous to the case where there is just one teller.
Say there are processes A, B and C that need concurrent access to a resource (lock, file or a data structure in memory, etc.). Further suppose there are 2 instances of the resource. So at most two processes can be granted access at a time.
Process A requests access to an instance of the resource, following the required semantics. This request to the kernel involves data structures to identify the resource and the maximum number of instances, 2. The kernel creates the semaphore with a count of 2, grants A access to the resource and decrements the count to 1, because now only one other process can get access.
Now process B requests access to the resource by following the same semantics. The kernel grants it access and decrements the count to 0.
Now process C requests access, but the kernel keeps it in a waiting state, because the count is 0 and no more than 2 processes can get concurrent access.
Process A finishes with the resource and lets the kernel know. The kernel notices this and grants access to process C, which has been waiting.
In the case of a mutex, the kernel grants access to the resource to only one process at a time.
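A possible sketch of that walkthrough, using a named POSIX semaphore with an initial count of 2 shared by three fork()ed children standing in for processes A, B and C; the name /demo_sem is arbitrary and error checking is omitted for brevity:

    /* Three processes contend for 2 instances of a resource via a semaphore.
       Compile with: gcc named_sem.c -pthread */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Ask the kernel for a named semaphore with an initial count of 2. */
        sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 2);

        for (int i = 0; i < 3; i++) {          /* processes A, B and C */
            if (fork() == 0) {
                sem_wait(sem);                 /* the third process waits here */
                printf("process %c got an instance\n", 'A' + i);
                sleep(1);                      /* use the resource */
                sem_post(sem);                 /* let the kernel know we are done */
                _exit(0);
            }
        }

        while (wait(NULL) > 0)                 /* reap the three children */
            ;
        sem_unlink("/demo_sem");               /* remove the name when done */
        return 0;
    }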
A normal binary semaphore is basically used for synchronization. A mutex, however, is for exclusive access to a resource. A mutex is a special variant of a semaphore that allows only one locker at a time, with stricter ownership semantics than a normal semaphore: the mutex should be released only by the thread that acquired it. Also note that, in the case of pthreads, a "fast" mutex may not check for this ownership error, whereas an error-checking mutex (PTHREAD_MUTEX_ERRORCHECK) will return an error.
For the query about the two processes A and B: process A lets the kernel know that it is done with its critical work, so that the resource can be made available to waiting processes like B.
You could find some related information in this link too:
When should we use mutex and when should we use semaphore
There is no such thing as "having" a semaphore. Semaphores don't have ownership like mutexes do. The code you describe would simply be buggy. Mutexes won't work if your code is buggy either.
Consider the most classic example of a semaphore -- allowing one train at a time on a section of track. You could implement this with a mutex if the train is a thread. The train would lock the track mutex before going on the track and unlock it after leaving the track.
But what if the train itself is multi-threaded? Which thread should own the track?
And what if the signalling devices are the threads, not the train? Here, the signalling device that detects the train entering the track has to lock the track while the signalling device that detects the train leaving the track has to unlock it.
Mutexes are suitable for cases where there is something that is owned by a particular thread for a short period of time. That thread can "own" the mutex. Semaphores are useful for cases where there is no thread to own anything or nothing for the thread to own.
I was asked this interview question. I replied that a thread is a process, reasoning that a process is a superset of a thread, but the interviewer didn't agree. It is confusing and I'm not able to find any clear answer to this.
A process is an executing instance of an application.
A thread is a path of execution within a process.
Also, a process can contain multiple threads.
1. It's important to note that a thread can do anything a process can do. But since a process can consist of multiple threads, a thread could be considered a "lightweight" process. Thus, the essential difference between a thread and a process is the work that each one is used to accomplish. Threads are used for small tasks, whereas processes are used for more "heavyweight" tasks: basically the execution of applications.
2. Another difference between a thread and a process is that threads within the same process share the same address space, whereas different processes do not. This allows threads to read from and write to the same data structures and variables, and also facilitates communication between threads. Communication between processes, also known as IPC or inter-process communication, is quite difficult and resource-intensive.
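A small sketch of point 2, assuming a POSIX system: a thread's write to a global variable is visible back in main(), while a fork()ed child only changes its own copy-on-write copy:

    /* Threads share the address space; a forked child does not.
       Compile with: gcc shared.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void *thread_body(void *arg) {
        counter = 42;              /* same address space as main() */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        printf("after thread:  counter = %d\n", counter);   /* prints 42 */

        if (fork() == 0) {
            counter = 99;          /* only the child's private copy changes */
            _exit(0);
        }
        wait(NULL);
        printf("after process: counter = %d\n", counter);   /* still 42 */
        return 0;
    }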
I feel like this is a terrible question.
Both are independent blocks of execution
Both are scheduled by the operating system
Threads run within the context of a process and share memory with the process.
I can't think of a time where a thread would have its own address space.
By that logic I would agree with your answer that a thread is a process. I think it's kind of a loaded question. I would have asked you to explain the differences between the two.
For more information here's a good thread to view on the subject.
Every process is a thread, but not every thread is a process.
A thread is just an independent sequence of operations. A process has additional context.
The nature of a thread is highly system-dependent. For example, some systems implement threads as part of the operating system. Other systems implement threads through a run-time library. The process itself manages its own threads (not the OS), and the management may be different for different processes (e.g., Java threading implemented differently from Ada threading).
In OS-scheduled threads, a thread and a process are different terms. A process is an address space with multiple, schedulable threads of execution.
In RTL-scheduled threads, the process is a thread.
Can the fork() function be used to replicate a multithreaded process? If so, will all threads be exactly the same, and if not, why not? If replication can't be done through fork(), is there any other function that can do it for me?
After a fork, only one thread is running in the child. This is a POSIX standard requirement. See the top answer to the question fork and existing threads?
No, the child will only have one thread. Forking a threaded process is not trivial. (See this article Threads and fork(): think twice before mixing them for a good rundown).
I don't know of any way of cloning a process and all its threads; I don't think that's possible on Linux.
No.
A fork creates a new process with its own thread(s), and copies the file descriptors and the virtual memory.
A child process does NOT share the same memory with its parent. So this is absolutely not the same.
Suppose that one of the other threads (any thread other than the one doing the fork()) has the job of deducting money from your checking account.
POSIX defined the behavior of fork() in the presence of threads to propagate only the forking thread.
If the other thread has a mutex locked, the mutex will be locked in the child process, but the lock owner will not exist to unlock it. Therefore, the resource protected by the lock will be permanently unavailable.
http://www.doublersolutions.com/docs/dce/osfdocs/htmls/develop/appdev/Appde193.htm
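A minimal sketch of that scenario (my own, not taken from the linked page): a helper thread holds a mutex while the main thread forks, so the child inherits a locked mutex with no owner and blocks forever when it tries to take it:

    /* fork() while another thread holds a mutex: the child deadlocks.
       Compile with: gcc fork_lock.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t account = PTHREAD_MUTEX_INITIALIZER;

    static void *holder(void *arg) {
        pthread_mutex_lock(&account);   /* "deducting money": hold the lock a while */
        sleep(5);
        pthread_mutex_unlock(&account);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, holder, NULL);
        sleep(1);                       /* make sure the helper holds the lock */

        if (fork() == 0) {
            /* Child: only this thread exists, yet the mutex is locked. */
            pthread_mutex_lock(&account);   /* never returns */
            puts("child: not reached");
            _exit(0);
        }

        pthread_join(t, NULL);
        return 0;                       /* the hung child must be killed by hand */
    }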
I have had these questions in mind since I started reading some new topics on processes and threads. I would be glad if somebody could help me out.
1) What happens if a thread is marked uncancelable, and then the process is killed inside the critical section?
2) Do we have a main thread for the program that is known to the operating system? I mean, does the operating system give the first thread of the program some special rights or something?
3) When we kill a process and the threads are not joined, do they become zombies?
First, don't kill or cancel threads; ask them to kill themselves. If you kill a thread from outside, you never know what side effects (variables, state of synchronization primitives, etc.) you leave behind. If you find it necessary for one thread to terminate another, then have the problematic thread check a switch, catch a signal, whatever, and clean up its state before exiting itself.
1) If by uncancelable you mean detached, it's the same as for a joinable thread: you don't know what mess you are leaving behind if you blindly kill it.
2) From an application-level viewpoint, the primary thing is that if the main thread calls exit() or returns from main(), it takes down all other threads with it. If the main thread terminates itself with pthread_exit(), the remaining threads continue on (see the sketch at the end of this answer).
3) Much like a process, a thread will retain some resources until it is reaped (joined) or the program ends, unless it was run detached.
Re the note in your question: threads don't share a stack; each thread has its own. See clone() for some info on thread creation.
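To illustrate point 2 of the previous answer, a small sketch (my own) showing that terminating main() with pthread_exit() lets the remaining thread run to completion:

    /* main() leaves via pthread_exit(); the worker keeps the process alive.
       Compile with: gcc main_exit.c -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        sleep(1);                          /* outlive main() on purpose */
        puts("worker: still running after main is gone");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        puts("main: leaving with pthread_exit(), not return/exit()");
        pthread_exit(NULL);   /* the process stays alive until the last thread ends */
    }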