Multithreading address space

Threads have their own call stack, so what kind of memory do different threads share? Do they have their own stack memory within the address space of a process? Is that memory sufficient for spawning hundreds of threads? If a process has an object B, in the case of Java it will be created on the heap. So, how are threads spawned by that process able to access that object on the heap?

what kind of memory do different threads share
All of the process's (user-mode) memory is available to all threads, which means you can even share an object stored on one thread's stack with another thread (for as long as that stack frame is alive).
Do they have their own stack memory within the address space of a process?
Yes, each thread has its own stack to run on.
Is that memory sufficient for spawning 100s of threads?
Yes, see http://msdn.microsoft.com/en-us/library/windows/desktop/ms686774(v=vs.85).aspx (the stack size is configurable per thread, so hundreds of stacks fit in a typical address space).
So, how are threads spawned by that process able to have access to that object on the heap?
That is covered by the answer to the first question: the heap belongs to the process, so any thread can dereference a pointer (or, in Java, a reference) to an object stored there.
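To make the heap-sharing point concrete, here is a minimal C/pthreads sketch (the names Shared and worker are just illustrative): one object is allocated on the heap once, and several threads read and write it through the same pointer.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    pthread_mutex_t lock;
    int counter;
} Shared;

static void *worker(void *arg)
{
    Shared *s = arg;                 /* same heap object in every thread */
    pthread_mutex_lock(&s->lock);
    s->counter++;
    pthread_mutex_unlock(&s->lock);
    return NULL;
}

int main(void)
{
    Shared *s = malloc(sizeof *s);   /* lives on the process-wide heap */
    pthread_mutex_init(&s->lock, NULL);
    s->counter = 0;

    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, s);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    printf("counter = %d\n", s->counter);  /* prints 4 */
    free(s);
    return 0;
}
```

The mutex is there because the memory is shared: visibility is free, but concurrent writes still need synchronization.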

Related

Dynamic variable declaration inside a thread

I learned that, apart from the data segment and the code segment, threads also share the heap segment:
What resources are shared between threads?
So if I create a variable dynamically using malloc() or calloc() inside a thread, will that variable be accessible to all the other threads of the same process?
Yes: heap-allocated variables are accessible from any thread within the same process, provided the other threads have some way to obtain the address (for example, through a shared pointer or a thread argument).
{malloc, calloc, realloc, free, posix_memalign} of glibc-2.2+ are thread safe
http://linux.derkeiler.com/Newsgroups/comp.os.linux.development.apps/2005-07/0323.html
Original post
Generally, malloc/new/free/delete on multithreaded systems are thread-safe, so this should be no problem; allocating in one thread and deallocating in another is quite a common thing to do.
As threads are an implementation feature, this is certainly implementation-dependent though, e.g. some systems require you to link with a multithreaded runtime library.
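A small sketch of the allocate-in-one-thread, free-in-another pattern mentioned above, using pthreads (the producer name is just illustrative); the heap pointer is handed back through the thread's return value:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *producer(void *arg)
{
    (void)arg;
    char *msg = malloc(32);            /* allocated in this thread */
    strcpy(msg, "hello from producer");
    return msg;                        /* heap memory outlives the thread */
}

int main(void)
{
    pthread_t t;
    void *result;

    pthread_create(&t, NULL, producer, NULL);
    pthread_join(t, &result);          /* receive the heap pointer */

    printf("%s\n", (char *)result);
    free(result);                      /* freed in a different thread
                                          than the one that allocated it */
    return 0;
}
```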
And this
This is also answered in the link you posted:
Threads differ from traditional multitasking operating system processes in that:
- processes are typically independent, while threads exist as subsets of a process
- processes carry considerable state information, whereas multiple threads within a process share state as well as memory and other resources
- processes have separate address spaces, whereas threads share their address space
- processes interact only through system-provided inter-process communication mechanisms
Context switching between threads in the same process is typically faster than context switching between processes.
So, yes it is.

What are the disadvantages of threads over process?

-Interview Question
I was asked the disadvantages of threads, and in what scenarios we should use a process instead of a thread.
I couldn't think of much except invalid memory access in some cases.
Threads spawned by the same process all share the same memory; processes each run in their own memory context.
In Linux (I don't know what the behavior under Windows is like), a newly spawned child process receives a copy of certain parts of the parent process's memory context (via copy-on-write), and is therefore more expensive memory-wise at runtime and CPU/MMU-wise at creation. Context switching, i.e. (off)loading the process from or onto the CPU (this happens when a process or thread has nothing to do and is pushed onto a queue in favor of processes or threads with actual work), can also be more expensive for a process than for a thread.
On the other hand, processes may be much more secure, since their memory is isolated from the memory of their sibling processes.
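A hedged illustration of that isolation difference on Linux: the same write to a global variable is visible to the parent when done from a thread, but not when done from a fork()ed child, whose memory is a copy-on-write copy.

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;

static void *thread_fn(void *arg) { (void)arg; value = 1; return NULL; }

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* 1: memory is shared */

    value = 0;
    pid_t pid = fork();
    if (pid == 0) {        /* child writes its own copy-on-write page */
        value = 1;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   value = %d\n", value);   /* still 0: memory is isolated */
    return 0;
}
```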

How can there be multiple call stacks allocated at the same time? How does the stack pointer change between threads?

Summary of my understanding:
The top memory addresses are used for the call stack (I initially thought there was only one call stack), and the stack grows downwards (What and where are the stack and heap?)
However, each thread gets its own stack allocated, so there should be multiple call stacks in memory (https://stackoverflow.com/a/80113/2415178)
Applications can share threads (e.g., the key application is using the main thread), but several threads can be running at the same time.
There is a CPU register called sp that holds the stack pointer, which tracks the top of the current call stack.
So here's my confusion:
Do all of the call stacks necessary for an application (if this is even possible to know) get allocated when the application gets launched? Or do call stacks get allocated/de-allocated dynamically as applications spin off new threads? And if that is the case, (I know stacks have a fixed size), do the new stacks just get allocated right below the previous stacks-- So you would end up with a stack of stacks in the top addresses of memory? Or am I just fundamentally misunderstanding how call stacks are being created/used?
I am an OS X application developer, so my visual reference for how call stacks are created come from Xcode's stack debugger:
Now I realize that how things are here are more than likely unique to OS X, but I was hoping that conventions would be similar across operating systems.
It appears that each application can execute code on multiple threads, and even spin off new worker threads that belong to the application-- and every thread needs a call stack to keep track of the stack frames.
Which leads me to my last question:
How does the sp register work if there are multiple call stacks? Is it only used for the main call stack? (Presumably the top-most call stack in memory, and associated with the main thread of the OS) [https://stackoverflow.com/a/1213360/2415178]
Do all of the call stacks necessary for an application (if this is even possible to know) get allocated when the application gets launched?
No. Typically, each thread's stack is allocated when that thread is created.
Or do call stacks get allocated/de-allocated dynamically as applications spin off new threads?
Yes.
And if that is the case, (I know stacks have a fixed size), do the new stacks just get allocated right below the previous stacks-- So you would end up with a stack of stacks in the top addresses of memory? Or am I just fundamentally misunderstanding how call stacks are being created/used?
It varies. But the stack just has to be at the top of a large enough chunk of available address space in the memory map for that particular process. It doesn't have to be at the very top. If you need 1MB for the stack, and you have 1MB, you can just reserve that 1MB and have the stack start at the top of it.
How does the sp register work if there are multiple call stacks? Is it only used for the main call stack?
A CPU has as many register sets as threads that can run at a time. When the running thread is switched, the leaving thread's stack pointer is saved and the new thread's stack pointer is restored -- just like all other registers.
There is no "main thread of the OS". There are some kernel threads that do only kernel tasks, but user-space threads also run in kernel space when they execute OS code. Pure kernel threads have their own stacks somewhere in kernel memory. But just like normal thread stacks, those don't have to be at the very top of memory; the stack pointer just has to start at the highest address of the chunk used for that stack.
There is no such thing as the "main thread of the OS". Every process has its own set of threads, and those threads are specific to that process, not shared. Typically, at any given point in time, most threads on a system will be suspended awaiting input.
Every thread in a process has its own stack, which is allocated when the thread is created. Most operating systems will leave some space between each stack to allow them to grow if needed, and to prevent them from colliding with each other.
Every thread also has its own set of CPU registers, including a stack pointer (pointing to a location in that thread's stack).
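One way to see the per-thread stacks described above (a sketch, assuming Linux/pthreads): have each thread print the address of a local variable. The addresses typically land in distinct regions, often several megabytes apart, one region per thread stack.

```c
#include <pthread.h>
#include <stdio.h>

static void *show_stack(void *arg)
{
    int local;   /* lives on THIS thread's stack */
    printf("thread %ld: local variable at %p\n", (long)arg, (void *)&local);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, show_stack, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```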

Stack for threads of a process in Linux

How is stack space allocated (in the same address space) to each thread of a process in Linux or any other OS for that matter?
It depends on the type of thread library: a user-space library like pthreads allocates memory itself and carves per-thread stacks out of it. On the OS side, each thread also gets a kernel stack.
On creation of a new thread, the operating system reserves space in the stack region for the new thread, below the existing stacks. It places a guard page between them (this is to prevent one thread's stack from growing into another's, though the details vary between operating systems). Once this is done, the initial stack for the new thread is created, which is typically one to two pages.
This process is repeated each time the process spawns another thread. All of these stacks live in the address space of the process that the threads are part of.
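Both the reserved stack size and the guard area are visible through the POSIX thread API. This sketch requests a 1 MiB stack with a one-page guard region; the sizes here are arbitrary examples, not defaults.

```c
#include <pthread.h>
#include <unistd.h>

static void *fn(void *arg) { return arg; }

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);

    /* Ask for a 1 MiB stack with a one-page guard region below it. */
    pthread_attr_setstacksize(&attr, 1024 * 1024);
    pthread_attr_setguardsize(&attr, (size_t)sysconf(_SC_PAGESIZE));

    pthread_t t;
    pthread_create(&t, &attr, fn, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```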

where thread is implemented in memory?

We know that each thread has its own stack and that it is implemented within the process. But my question is: when a thread runs on its own stack, is that the same stack used by the process, or a separate one?
One more doubt: threads share global variables, file descriptors, signal handlers, etc. But how are all of these shared within the same address space in which all the threads execute?
A brief explanation will be appreciated.
when a thread runs on its own stack, is that the same stack used by the process, or a separate one?
In most cases, under Linux in a multithreaded application, all of the threads share the same address space. Each thread, if it is running on a separate processor, may have locally cached memory, but the overall address space is shared by all threads. Even per-thread stack space is shared by all threads -- each thread just gets a different contiguous memory area for its stack.
But how are all of these shared within the same address space?
This is also true of the global variables, file descriptors, etc. They are all shared.
Most thread implementations running under Linux use the clone(2) syscall to create new threads. To quote from the clone man page:
clone() creates a new process, in a manner similar to fork(2). It is actually a library function layered on top of the underlying clone() system call, hereinafter referred to as sys_clone. A description of sys_clone is given toward the end of this page.
Unlike fork(2), these calls allow the child process to share parts of its execution context with the calling process, such as the memory space, the table of file descriptors, and the table of signal handlers.
You can see the cloned processes by using ps -eLf under Linux.
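A minimal sketch of clone() with CLONE_VM (assuming Linux/glibc; the names child_fn and shared_value are illustrative): the child is created sharing the parent's memory, so its write to a global is visible in the parent, which is exactly the sharing the man page describes.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int shared_value = 0;

static int child_fn(void *arg)
{
    (void)arg;
    shared_value = 42;   /* with CLONE_VM this writes the parent's memory too */
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);

    /* Stacks grow downward on most architectures, so pass the TOP of the buffer. */
    pid_t pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    waitpid(pid, NULL, 0);

    printf("shared_value = %d\n", shared_value);   /* prints 42 */
    free(stack);
    return 0;
}
```

Thread libraries pass many more flags (CLONE_FILES, CLONE_SIGHAND, CLONE_THREAD, ...) to also share file descriptor tables and signal handlers, as described in the quote above.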
