How is a call on another core realized? (multithreading)

The program is in memory and the first processor executes
it; then there is a call to a function which is said to
be executed on the other core. How does the first core send
the call address to the other core? Is there some communication
mechanism between the cores other than the RAM they both share?

OK, it's like this. Threads cannot be called, only signaled, and functions are not, in general, tied to threads - several threads on several cores may be executing the same function/code.
That said, there is certainly an inter-processor driver that can communicate between cores. This is essential to allow threads to be allocated to cores and also to allow the OS to stop threads when a process terminates.
When inter-core comms is required, the producer thread stores data in shared memory and signals the other core by asserting a hardware-interrupt, so forcing the 'target' core to enter the OS and handle the signaled data.
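At user level you can't raise inter-processor interrupts yourself, but the 'store data in shared memory, then signal' pattern described above can be sketched with ordinary threads. A minimal, hedged illustration in Go (the names are invented, and the channel send merely stands in for the hardware interrupt):

package main

import "fmt"

// A user-level analogue of the store-then-signal pattern: the producer
// writes into shared memory first, then signals; the consumer sleeps,
// using no CPU, until it is signaled.
func main() {
    shared := make([]byte, 0, 64) // "shared memory" between the two goroutines
    ready := make(chan struct{})  // the "interrupt": a signal that data is ready
    done := make(chan struct{})

    go func() { // consumer: blocked until signaled
        <-ready
        fmt.Printf("consumer woke up, got %q\n", shared)
        close(done)
    }()

    shared = append(shared, []byte("hello from the producer")...) // store the data first
    close(ready)                                                  // then assert the signal

    <-done
}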
Essentially, none of this is even remotely trivial and, if you wish to know more/have your brain bent out of shape, look at the scheduler/dispatcher code for Linux or sign your life away with M$.

Related

What is the difference between a CPU core's threads and a software application's threads?

I have a web application which supports multithreading, in which we can run async tasks simultaneously on different threads. I understand what that kind of thread means.
Now suppose the server on which the application is running has a multi-core CPU with hyper-threading enabled.
Now, how is my application supposed to take advantage of these threads? Is there any relation between the two that I am missing?
What I understand about a CPU's threads is this:
A thread is a single line of commands that are getting processed; each application has at least one thread, most have several. A core is the physical hardware that works on the thread. In general a processor can only work on one thread per core; CPUs with hyper-threading can work on up to two threads per core.
For processors with hyper-threading, there are extra registers and execution units in the core so it can store the state of two threads and work on them both. Normally, to change threads you have to empty the registers into the cache, write that back to the main memory, then load up the cache with the new values and load up the registers. Context switches hurt performance significantly.
But when you have too many background tasks running, how do they make use of just a limited number of the cores' threads (i.e. 2 to 8)?
PS: I have already checked What is the difference between a process and a thread? and am not looking for a definition of a process. So it's not a duplicate.
If you are making use of multiple cores in your program, then the OS will schedule which cores run which threads, taking many factors into account, including other processes running, what exactly your code is trying to do, and much more. As regards async tasks, these may not necessarily be running on a different thread or core; they may simply be tasks that are not instantaneous, so some scheduler may decide to start doing other things until there is a signal that the async task is complete. It will vary widely depending on the language the web application is written in, and on the implementation.
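To make the "many tasks, few cores" point concrete, here is a small sketch in Go (the question doesn't name a language, so treat this purely as an illustration). A hundred concurrent tasks get multiplexed onto however many hardware threads the machine has; the scheduler decides who runs when:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// Far more concurrent tasks than cores: only about NumCPU of them are
// actually executing at any instant, the rest wait their turn.
func main() {
    fmt.Println("hardware threads available:", runtime.NumCPU())

    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(id int) { // one "background task"
            defer wg.Done()
            sum := 0
            for n := 0; n < 1000000; n++ { // some CPU-bound work
                sum += n % 7
            }
            _ = sum
        }(i)
    }
    wg.Wait()
    fmt.Println("all 100 tasks finished on", runtime.NumCPU(), "cores")
}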

There are many threads reserved while the golang application is running

The golang application is a tool that receives files by invoking a C library, saves them to disk and reports the transfer state to a monitoring service over HTTP.
After a few transfers, I found there were about 70+ threads in existence, with only a few goroutines.
I checked the C and Go source code; there is no thread or goroutine leak that I can find.
I used "dlv" to debug the application; here is the stack of one such thread:
(dlv) bt
0 0x000000000046df03 in runtime.futex
at /home/vagrant/resource/go/src/runtime/sys_linux_amd64.s:388
1 0x0000000000437e92 in runtime.futexsleep
at /home/vagrant/resource/go/src/runtime/os_linux.go:45
2 0x000000000041e042 in runtime.notesleep
at /home/vagrant/resource/go/src/runtime/lock_futex.go:145
3 0x000000000044036d in runtime.stopm
at /home/vagrant/resource/go/src/runtime/proc.go:1594
4 0x0000000000441178 in runtime.findrunnable
at /home/vagrant/resource/go/src/runtime/proc.go:2021
5 0x0000000000441cec in runtime.schedule
at /home/vagrant/resource/go/src/runtime/proc.go:2120
6 0x0000000000442063 in runtime.park_m
at /home/vagrant/resource/go/src/runtime/proc.go:2183
7 0x0000000000469f1b in runtime.mcall
at /home/vagrant/resource/go/src/runtime/asm_amd64.s:240
I don't know where these threads come from - maybe they are a thread pool of the golang runtime?
Could anyone look at this? Thank you very much!
The problem
The golang application is a tool that receives files by invoking a C
library, saves them to disk and reports the transfer state to a
monitoring service over HTTP.
After a few transfers, I found there were about 70+ threads in existence,
with only a few goroutines.
The cause
Each call to C (via cgo, or a syscall on Windows etc) is not really
different from performing an OS system call as far as the Go scheduler
is concerned.
What happens is this:
When a goroutine is being executed, it runs on an OS thread
(this is sort of obvious, I fathom).
When it performs a syscall or calls C, that goroutine blocks
(stops executing Go code).
The Go runtime scheduler watches the goroutines which got blocked
and, once at least a single "scheduler tick" (which currently — in
Go 1.8 and 1.9 — is 20 µs) has passed, the goroutine is still blocked,
and there are other runnable goroutines,
it creates another OS thread to let those other goroutines
continue execution.
This behaviour might appear to be counter-intuitive at first
but without it, on, say, a two-CPU machine, you would need to just call
two syscalls (such as reading or writing a file) in parallel from
any two goroutines to block the rest of the active goroutines
from doing their work.
In other words, the scheduler tries to keep up with Go's promise
of always having up to GOMAXPROCS goroutines running
if there are goroutines which want to run, and GOMAXPROCS
is set to the number of CPUs (cores) of the machine.
So, what happens is that if you have a reasonably high churn of C calls which complete slower than that single scheduler tick, you'll have a growing pool
of allocated OS threads.
Note that this is not bad in itself: sure, you'll be allocating resources
(on a typical commodity OS each thread has some 8 MiB of stack allocated
plus some bookkeeping data structures internal to the OS) but they are
not wasted: these threads will get reused as soon as they are needed.
Say, your next burst of such C calls will reuse the allocated threads.
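If you want to see that thread growth for yourself, here is a hedged, Linux-only sketch (syscall.Nanosleep stands in for a slow C call; the numbers are arbitrary). Each goroutine blocks in the kernel for far longer than the scheduler tick, so the runtime keeps detaching those threads and creating more:

package main

import (
    "fmt"
    "runtime/pprof"
    "sync"
    "syscall"
    "time"
)

// 50 goroutines each sit in a blocking syscall for 100 ms, far longer than
// the ~20 µs scheduler tick, so the runtime keeps spinning up OS threads to
// keep the remaining goroutines runnable. The threadcreate profile counts
// how many OS threads have been created so far.
func main() {
    threads := pprof.Lookup("threadcreate")
    fmt.Println("threads created before:", threads.Count())

    var wg sync.WaitGroup
    for i := 0; i < 50; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            ts := syscall.Timespec{Nsec: int64(100 * time.Millisecond)}
            _ = syscall.Nanosleep(&ts, nil) // stands in for a slow cgo call
        }()
    }
    wg.Wait()

    fmt.Println("threads created after: ", threads.Count())
}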
The solution
Still, if you'd like to prevent that from happening, the common approach
is to reasonably serialize your C calls.
A typical approach to this is to have a single "worker" goroutine
which receives "tasks" — in the form of values of some type, usually
a custom type created by you — over a channel and sends the results of
their execution over another channel.
The input channel may be buffered — effectively turning it into a queue.
If you'd still want to parallelize that work, you can have a pool of
worker goroutines — all reading the single input channel and writing to
the single output channel.
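A minimal sketch of that arrangement (the Task and Result types, the queue length and the pool size are all invented for the example; the worker body is where the cgo call would go):

package main

import (
    "fmt"
    "sync"
)

// Task and Result are placeholders; in the real program a Task would carry
// whatever the C library needs and the worker body would make the cgo call.
type Task struct{ FileName string }

type Result struct {
    FileName string
    Err      error
}

func worker(tasks <-chan Task, results chan<- Result) {
    for t := range tasks {
        // the serialized (or pool-limited) C call would happen here
        results <- Result{FileName: t.FileName}
    }
}

func main() {
    tasks := make(chan Task, 16) // buffered input channel: effectively a queue
    results := make(chan Result)

    const nWorkers = 4 // pool size bounds the number of concurrent C calls
    var wg sync.WaitGroup
    for i := 0; i < nWorkers; i++ {
        wg.Add(1)
        go func() { defer wg.Done(); worker(tasks, results) }()
    }
    go func() { wg.Wait(); close(results) }() // close output once all workers exit

    go func() { // submit some work, then close the queue
        for i := 0; i < 8; i++ {
            tasks <- Task{FileName: fmt.Sprintf("file-%d", i)}
        }
        close(tasks)
    }()

    for r := range results {
        fmt.Println("done:", r.FileName, r.Err)
    }
}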
But note that if those C calls spend most of their time doing disk I/O
and the files they read/write are located on the filesystem which
is backed by a single medium, you usually won't gain much with
parallelizing unless that medium is blazingly fast — such as SSD or
in-memory (RAM) disk.
So consider all the options and think through your design.

Process with multiple threads on a multiprocessor system. How do they work?

So I was reading about Processes and Threads and I had a question. Following is the scenario.
Uniprocessor Environment
I understand that the OS rotates the processes over the processor for a particular time period (quantum). Now I get it when the process is single threaded, i.e. just one path of execution. In that case, whenever it is assigned the processor, it continues with its execution. Let's say the process forks and/or just creates a new thread. Now how does the entire process work? Is it that the OS will say to process P "Go on, continue with execution" and the process within itself will pick the new thread or the parent thread on rotation? So that if there are more than two threads, the rotation seems fair to each thread. Or does the OS actually interact with the threads? (In that case I am not sure what happens.)
Multiprocessor Environment
Now say I have a multiprocessor environment. In this case, if there was just a single-threaded process, then the OS would assign one of the processors to it and on it would go with its execution. Now say there are multiple threads in the process. If I assign one of the processors to the process, and ask it to continue its execution, and the process has to pick one of its threads for execution, then there will never be parallel processing going on within that specific process, since the process would only ever have one of its threads on the processor.
So how does it happen in both the cases?
Cheers.
Process Scheduling
Operating Systems ultimately control these types of thread scheduling.
Windows systems are priority-based and so will allow a process to consume more resources than others. This is why your machine can 'hang' if a process has been escalated to a high priority. Priorities range between 1 and 31, as far as I know.
Mac OS / Linux / Unix are time-based, allowing all processes to have equal amounts of CPU time. Therefore loading more processes will slow your system down as they all share a smaller slice of execution time.
Uniprocessor Environment
The OS is ultimately responsible for this, but switching processes involves (I cannot guarantee accuracy here; it's just an indication):
Halting a process / thread
Storing the current stack (code location)
Storing the current registers of the CPU
Asking the kernel for the next process/thread to run
Kernel indicates which one has to be run
OS reloads the registers from the cache
OS reloads the current stack for the next application.
Resumes the process
Obviously the more threads and processes you have running, the slower it will become. The problem is that switching processes can actually take longer than the time the process is allowed to execute.
Threads are just child processes of a single process. For a single processor, it just looks like additional work.
Multi-processor Environment
Multi-processor environments work differently as the cache is shared amongst processors. I believe these are called L1 (Level 1) and L2 caches. So the difference is that processor A can reload the state stored by processor B without conflicts. 'Hyper-threading' also has the same approach, although this is processor specific. The difference here is that a processor can solely control a specific process - this is called 'CPU affinity'. It's not encouraged for every process, but it does allow an application to have a dedicated processor to work off.
This is OS-specific, of course, but most operating systems schedule at the thread level. A process is just a grouping of threads. For example, on Linux, threads are called "tasks" and each is scheduled independently. They are created with the clone call. What is typically called a thread is a task which shares its address space (and other resources such as file descriptors, mount points, etc.) with the creating task. Note that the clone call can also create what is typically called a process if the flags to enable sharing are not passed.
Considering the above, any thread may be scheduled at any time on any processor, no matter how many processors there are available. That said, most OSs also attempt to maintain some measure of processor affinity to avoid excessive cache misses, but usually if a thread is runnable and a different CPU is available, it will change CPUs. Often there is also a way to specify which CPUs a particular thread may execute upon.
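As a concrete (and hedged) example of that last point, on Linux the per-thread affinity mask can be set with sched_setaffinity (Windows has SetThreadAffinityMask). The sketch below uses Go with the golang.org/x/sys/unix wrappers to pin the calling OS thread to CPU 0; in Go the goroutine must first be locked to its OS thread for this to be meaningful:

//go:build linux

package main

import (
    "fmt"
    "runtime"

    "golang.org/x/sys/unix"
)

func main() {
    runtime.LockOSThread() // keep this goroutine on one OS thread
    defer runtime.UnlockOSThread()

    var set unix.CPUSet
    set.Zero()
    set.Set(0) // allow this thread to run on CPU 0 only

    if err := unix.SchedSetaffinity(0, &set); err != nil { // pid 0 = calling thread
        fmt.Println("SchedSetaffinity failed:", err)
        return
    }
    fmt.Println("this OS thread is now restricted to CPU 0")
}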
Doesn't matter whether there is 1 or 128 processors. The OS manages access to resources to try and efficiently match up requests with availability, and that includes CPU execution. If a thread is running, it has already managed to get some CPU but, if it requests a resource that is not immediately available, it no longer needs any CPU until that other resource becomes free, and so the OS will remove CPU execution from it and, if there is another thread that is waiting for CPU, it will hand it over. When the requested resource does become available, the thread will be made ready again. If there is a core free, it will be made running 'immediately'; if not, the CPU scheduling algorithm makes a decision on whether to stop a currently-running thread to free up a core or to leave the newly-ready thread waiting.
It's better to try and ignore things like 'time-slice, quantum, priority' - it causes much confusion and FUD. If a running thread wants something it cannot have yet, it doesn't need any more CPU cycles, and the OS will take them away and, if another thread needs it, apply them there. That is why preemptive multitaskers exist - to match up threads with resources in an attempt to maximize forward progress.

C#: When will thread switching most probably occur?

I was wondering when .NET would most probably switch from one thread to another?
I understand we can't predict exactly when this will happen, but is there any intelligence in this? For example, when a thread is executing, will it try to wait for a method to return or a loop to finish before switching?
I'm not an expert on .NET, but in general scheduling is handled by the kernel.
Either your thread's timeslice has expired (threads/processes only get a certain amount of CPU time)
Your thread has blocked for IO.
Some other obscure reason, like waiting for an IPC message, a network packet or something.
Threads can be preempted at any point along their execution path, be it in a loop or returning from a function. This in general isn't handled by the underlying VM (.NET or JVM) but is controlled by the OS.
Of course there is 'intelligence', of a sort:). The set of running threads can only change upon an interrupt, either:
An actual hardware interrupt from a peripheral device, eg. disk, NIC, KB, mouse, timer.
A software interrupt, (ie. a system call), that can change the state of thread/s. This encompasses sleep calls and calls to wait/signal on inter-thread synchro objects, as well as I/O calls that request data that is not immediately available.
If there is no interrupt, the OS cannot change the set of running threads because it is not entered. The OS does not know or care about loops, function/method calls, (except those that make system calls as above), gotos or any other user-level flow-control mechanisms.
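The "preempted at any point, even mid-loop" behaviour is easy to observe. The question is about .NET, but the mechanism is language-independent, so here is a small sketch in Go (1.14 or later, where the runtime's asynchronous preemption is in play): one logical processor, plus a goroutine that spins forever without ever yielding, and yet the main goroutine keeps making progress because the spinner is stopped from the outside:

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    runtime.GOMAXPROCS(1) // a single logical processor

    go func() {
        for { // busy loop: no calls, no I/O, no voluntary yields
        }
    }()

    // The spinner never cooperates, yet this loop still runs: the timer
    // expiry causes the spinning goroutine to be preempted from outside.
    for i := 1; i <= 3; i++ {
        time.Sleep(200 * time.Millisecond)
        fmt.Println("main still making progress:", i)
    }
}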
I read your question only now, so it may not be relevant anymore, but after reading the above answers, I just want to make sure:
Threads are managed (as far as I know) by the process they belong to. It has nothing to do with the Operating System (and that is the main reason why working with multiple threads is faster than working with multiple processes: there is data sharing between threads, and switching between them happens faster than the context switch which occurs between processes via the short-term scheduler).
(NOTE: There are two types of threads, USER_MODE threads and KERNEL_MODE threads, and each OS can have both of them or just one of them. Anyway, a thread working in a user application environment is considered a USER_MODE thread and is managed by the process it belongs to.)
Am I right?
Thanks!!!

If I "get back to the main thread" then what exactly happens, and how do interrupts work with threads?

Background: I was using Beej's guide and he mentioned forking and ensuring you "get the zombies". An Operating Systems book I grabbed explained how the OS creates "threads" (I always thought it was a more fundamental piece), and by quoting it, I mean the OS decides nearly everything. Basically they share all external resources, but they split the register and stack spaces (and I think a 3rd thing).
So I get to the waitpid function which http://www.qnx.com's developer docs explain very well. In fact, I read the entire section on threads, minus all the types of conditions after a Processes and Threads google.
The fact that I can split code up and put it back together doesn't confuse me. HOW I can do this is confusing.
In C and C++, your program is a main() function, which goes forward, calls other functions, maybe loops forever (waiting for input or rendering), and then eventually quits or returns. In this model I see NO reason for it to stop beyond "I'm waiting for something", in which case it just loops.
Well, it seems it can loop by setting certain things, like "I'm waiting for a semaphore" or "a response" or "an interrupt". Or maybe it gets interrupted without waiting for one. This is what confuses me.
The processor time-slices processes and threads. That's all fine and dandy, but how does it decide when to stop one? I understand that you get to the polling function and say "Hey, I'm waiting for input, a clock tick, or the user to do something". Somehow it tells this to the OS? I'm not sure. But more so:
It seems to be able to completely randomly interrupt or interject, even on a single-threaded application. So you're running one thread and suddenly waitpid() says "Hey, I finished a process, let me interrupt this, we both hate zombies, I gotta do this." and you're still looping on some calculation. So, what just happens??? I have no idea, somehow they both run and your computation isn't messed with, 'cause it's single threaded, but that somehow doesn't mean that it won't stop what it's doing to run waitpid() inside the same thread WHILE you're still doing your other app things.
Also confusing is how you can be notified, like iOS's notifications, and say "Hey, I got some UI changes, get me off of thread 16 and put me back on thread 1 so I can change this thing". But same question as the last paragraph: how does it interrupt a thread that's running?
I think I understand the splitting, but this joining is utterly confusing. It's like the textbooks have this "rabbit from hat" step I'm supposed to accept. Other SO posts told me they don't share the same stack, but that didn't help, now I'm imagining a slinky (stack) leaning over to another slinky, but unsure how it recombines to change the data.
Thanks for any help, I apologize that this is long, but I know someone's going to misinterpret this and give me the "they are different stacks" answer if I'm too concise here.
Thanks,
OK, I'll have a go, though it's gonna be 'economical with the truth':)
It's sorta like this:
The OS kernel scheduler/dispatcher is a state-machine for managing threads. A thread comprises a stack, (allocated at the time of thread creation), and a Thread Control Block, (TCB), struct in the kernel that holds thread state and can store thread context, (including user registers, especially the stack-pointer). A thread must have code to run, but the code is not dedicated to the thread - many threads can run the same code. Threads have states, eg. blocked on I/O, blocked on an inter-thread signal, sleeping for a timer period, ready, running on a core.
Threads belong to processes - a process must have at least one thread to run its code and has one created for it by the OS loader when the process starts up. The 'main thread' may then create others that will also belong to that process.
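To put a rough shape on that bookkeeping (this is a sketch only, not any real kernel's layout; the fields are invented for illustration):

// One TCB per thread: saved context plus a scheduling state, owned by a process.
type ThreadState int

const (
    Ready ThreadState = iota
    Running
    BlockedOnIO
    BlockedOnSignal
    Sleeping
)

type TCB struct {
    ID           int
    State        ThreadState
    StackPointer uintptr     // saved when the thread is blocked or preempted
    SavedRegs    [16]uintptr // the rest of the saved user-register context
    Priority     int
    Owner        *PCB // the process this thread belongs to
}

// One PCB per process: a process must have at least one thread.
type PCB struct {
    ID      int
    Threads []*TCB
}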
The state-machine inputs are software interrupts - system calls from those threads that are already running on cores, and hardware interrupts from peripheral devices/controllers, (disk, network, mouse, KB etc), that use processor hardware features to stop the processor/s running instructions from the threads and 'immediately' run driver code instead.
The output of the state-machine is a set of threads running on cores. If there are fewer ready threads than cores, the OS will halt the unusable cores. If there are more ready threads than cores, (ie. the machine is overloaded), the 'scheduling algorithm' that decides which threads to run takes into account several factors - thread and process priority, priority boosts for threads that have just become ready on I/O completion or inter-thread signal, foreground-process boosts and others.
The OS has the ability to stop any running thread on any core. It has an interprocessor hardware-interrupt channel and drivers that can force any thread to enter the OS and be blocked/stopped, (maybe because another thread has just become ready and the OS scheduling algorithm has decided that a running thread must be immediately preempted).
The software interrupts from running threads can change the set of running threads by requesting I/O, or by signaling other threads, (via events, mutexes, condition-variables and semaphores). The hardware interrupts from peripheral devices can change the set of running threads by signaling I/O completion.
When the OS gets these inputs, it uses that input, and internal state in containers of Thread Control Block and Process Control Block structs, to decide which set of ready threads to run next. It can block a thread from running by saving its context, (including registers, especially stack pointer), in its TCB and not returning from the interrupt. It can run a thread that was blocked by restoring its context from its TCB to a core and performing an interrupt-return, so allowing the thread to resume from where it left off.
The gain is that no thread that is waiting for I/O gets to run at all and so does not use any CPU and, when I/O becomes available, a waiting thread is made ready 'immediately' and, if there is a core available, running.
This combination of OS state data, and hardware/software interrupts, efficiently matches up threads that can make forward progress with cores available to run them, and no CPU is wasted on polling I/O or inter-thread comms flags.
All this complexity, both in the OS and for the developer who has to design multithreaded apps and so put up with locks, synchronization, mutexes etc, has just one vital goal - high performance I/O. Without it, you can forget video streaming, BitTorrent and browsers - they would all be too piss-slow to be useable.
Statements and phrases like 'CPU quantum', 'give up the remainder of their time-slice' and 'round-robin' make me want to throw up.
It's a state-machine. Hardware and software interrupts go in, a set of running threads comes out. The hardware timer interrupt, (the one that can time-out system calls, allow threads to sleep and share out CPU on a box that is overloaded), though valuable, is just one of many.
So I'm on thread 16, and I need to get to thread 1 to modify UI. I
randomly stop it anywhere, "move the stack over to thread 1" then
"take its context and modify it"?
No, time for 'economical with truth' #2...
Thread 1 is running the GUI. To do this, it needs inputs from mouse, keyboard. The classic way for this to happen is that thread 1 waits, blocked, on a GUI input queue - a thread-safe producer-consumer queue, for KB/mouse messages. It's using no CPU - the cores are off running services and BitTorrent downloads. You hit a key on the keyboard, and the keyboard-controller hardware raises an interrupt line on the interrupt controller, causing a core to jump to the keyboard driver code as soon as it has finished its current instruction. The driver reads the KB controller, assembles a KeyPressed message and pushes it onto the input queue of the GUI thread with focus - your thread 1. The driver exits by calling the scheduler interrupt entry point so that a scheduling run can be performed and your GUI thread is assigned a core and run on it. To thread 1, all it has done is make a blocking 'pop' call on a queue and, eventually, it returns with a message to process.
So, thread 1 is performing:
void* HandleGui(void* arg){
    while(true){
        GUImessage message=thread1InputQueue.pop(); // blocks until a message arrives
        switch(message.type){
            .. // lots of case statements to handle all the possible GUI messages
            ..
            ..
        };
    };
};
If thread 16 wants to interact with the GUI, it cannot do it directly. All it can do is to queue a message to thread 1, in a similar way to the KB/mouse drivers, to instruct it to do stuff.
This may seem a bit restrictive, but the message from thread 16 can contain more than POD. It could have a 'RunMyCode' message type and contain a function pointer to code that thread 16 wants to be run in the context of thread 1. When thread 1 gets around to handling the message, its 'RunMyCode' case statement calls the function pointer in the message. Note that this 'simple' mechanism is asynchronous - thread 16 has issued the message and runs on - it has no idea when thread 1 will get around to running the function it passed.
This can be a problem if the function accesses any data in thread 16 - thread 16 may also be accessing it. If this is an issue, (and it may not be - all the data required by the function may be in the message, which can be passed into the function as a parameter when thread 1 calls it), it is possible to make the function call synchronous by making thread 16 wait until thread 1 has run the function. One way would be for the function to signal an OS synchronization object as its last line - an object upon which thread 16 will wait immediately after queueing its 'RunMyCode' message:
void* runOnGUI(GUImessage message){
    // do stuff with GUI controls
    message.notifyCompletion->signal(); // tell thread 16 to run again
};

void* thread16run(){
    ..
    ..
    GUImessage message;
    OSkernelWaitObject* waitEvent;      // a kernel wait/event object (pseudocode)
    message.type=RunMyCode;
    message.function=runOnGUI;
    message.notifyCompletion=waitEvent;
    thread1InputQueue.push(message);    // ask thread 1 to run my function
    waitEvent->wait();                  // wait, blocked, until the function is done
    ..
    ..
};
So, getting a function to run in the context of another thread requires cooperation. Threads cannot call other threads - only signal them, usually via the OS. Any thread that is expected to run such 'externally signaled' code must have an accessible entry point where the function can be placed and must execute code to retrieve the function address and call it.

Resources