Thread id of QtConcurrent run - multithreading

I am writing multi-threaded programs with Qt.
I used this code to check whether it behaves as I expected:
QFuture<void> t1 = QtConcurrent::run(thread_process1, (void *)this);
QFuture<void> t2 = QtConcurrent::run(thread_process2, (void *)this);
Both thread_process1 and thread_process2 contain only one line, which is
qDebug()<<"thread id: "<<QString("%1").arg((int) QThread::currentThreadId(), 0, 16) ;
However, they both show
thread id: "ffffffffb6085b40"
Am I doing it wrong?
QFutureWatcher doesn't seem to help.

The docs for QtConcurrent::run say:
Runs function in a separate thread. The thread is taken from the global QThreadPool. Note that the function may not run immediately; the function will only be run when a thread is available.
There is no guarantee that each call to run will run in a different thread. It is possible that the functions run so quickly that they are both handled sequentially by the same thread. Try putting a sleep call in thread_process1 to see if the functions are then picked up by different threads.
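A minimal sketch of that experiment (my own example, assuming Qt 5 with QT += concurrent in the .pro file; the two functions are free-function stand-ins for the question's thread_process1/thread_process2):

#include <QtConcurrent>
#include <QDebug>
#include <QFuture>
#include <QString>
#include <QThread>

// Stand-ins for the question's thread_process1/thread_process2.
void thread_process1(void *)
{
    QThread::msleep(100);   // keep this task busy so the pool has to use a second thread for task 2
    qDebug() << "thread 1 id:" << QString("%1").arg((quintptr)QThread::currentThreadId(), 0, 16);
}

void thread_process2(void *)
{
    qDebug() << "thread 2 id:" << QString("%1").arg((quintptr)QThread::currentThreadId(), 0, 16);
}

int main()
{
    QFuture<void> t1 = QtConcurrent::run(thread_process1, (void *)nullptr);
    QFuture<void> t2 = QtConcurrent::run(thread_process2, (void *)nullptr);
    t1.waitForFinished();
    t2.waitForFinished();
    return 0;
}

With the sleep in place, the two lines should normally show different IDs, provided the global pool allows more than one thread; quintptr is used here so the full pointer-sized ID is printed.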

Related

Why doesn't Go's LockOSThread lock this OS thread?

The documentation for runtime.LockOSThread states:
LockOSThread wires the calling goroutine to its current operating system thread. Until the calling goroutine exits or calls UnlockOSThread, it will always execute in that thread, and no other goroutine can.
But consider this program:
package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    runtime.GOMAXPROCS(1)
    runtime.LockOSThread()
    go fmt.Println("This shouldn't run")
    time.Sleep(1 * time.Second)
}
The main goroutine is wired to the one available OS thread set by GOMAXPROCS, so I would expect that the goroutine created on line 3 of main will not run. But instead the program prints "This shouldn't run", pauses for 1 second, and quits. Why does this happen?
From the runtime package documentation:
The GOMAXPROCS variable limits the number of operating system threads that can execute user-level Go code simultaneously. There is no limit to the number of threads that can be blocked in system calls on behalf of Go code; those do not count against the GOMAXPROCS limit.
The sleeping thread doesn't count towards the GOMAXPROCS value of 1, so Go is free to have another thread run the fmt.Println goroutine.
Here's a Windows sample that will probably help you understand what's going on. It prints the thread IDs on which the goroutines are running. It uses a Windows-specific API, so it works only on Windows, but you can easily port it to other systems.
package main

import (
    "fmt"
    "runtime"

    "golang.org/x/sys/windows"
)

func main() {
    runtime.GOMAXPROCS(1)
    runtime.LockOSThread()

    ch := make(chan bool)

    go func() {
        fmt.Println("2", windows.GetCurrentThreadId())
        ch <- true // signal the main goroutine
    }()

    fmt.Println("1", windows.GetCurrentThreadId())
    <-ch // block until the other goroutine has printed its thread ID
}
I don't use sleep, to avoid the runtime spawning another thread for a sleeping goroutine. The channel operation just blocks and removes the goroutine from the run queue. If you execute the code you will see that the thread IDs are different: the main goroutine has locked one of the threads, so the runtime has to spawn another one.
As you already know, GOMAXPROCS does not prevent the runtime from spawning more threads. GOMAXPROCS is more about the number of threads that can execute goroutines in parallel. More threads can be created for goroutines that are waiting for a syscall to complete, for example.
If you remove runtime.LockOSThread(), you will see that the thread IDs are equal. That's because the channel read blocks the goroutine and allows the runtime to yield execution to another goroutine without spawning a new thread. That's how multiple goroutines can execute concurrently even when GOMAXPROCS is 1.
GOMAXPROCS(1) causes there to be one ACTIVE M (OS thread) present to serve the goroutines (G).
In your program there are two goroutines: one is main, and the other is the fmt.Println one. Since the main goroutine is sleeping, the M is free to run any goroutine, which in this case means fmt.Println can run.
That looks like correct behavior to me. From what I understand, the LockOSThread() function only ties all future go calls to a single OS thread; it does not sleep or halt the thread.
Edit for clarity: the only thing that LockOSThread() does is turn off multithreading so that all future go calls happen on a single thread. This is primarily for use with things like graphics APIs that require a single thread to work correctly.
Edited again.
If you compare this:
func main() {
    runtime.GOMAXPROCS(6)
    // insert long-running routine here
    go fmt.Println("This may run almost straight away if the long routine uses a different thread")
}
To this:
func main() {
    runtime.GOMAXPROCS(6)
    runtime.LockOSThread()
    // insert long-running routine here
    go fmt.Println("This will only run after the task above has completed")
}
If we insert a long running routine in the location indicated above then the first block may run almost straight away if the long routine runs on a new thread, but in the second example it will always have to wait for the routine to complete.

Understanding Threads Swift

I sort of understand threads, correct me if I'm wrong.
Is a single thread allocated to a piece of code until that code has completed?
Are the threads prioritised to whichever piece of code is run first?
What is the difference between main queue and thread?
My most important question:
Can threads run at the same time? If so, how can I specify which parts of my code should run on a selected thread?
Let me start this way. Unless you are writing a special kind of application (and you will know if you are), forget about threads. Working with threads is complex and tricky. Use dispatch queues… it's simpler and easier.
Dispatch queues run tasks. Tasks are closures (blocks) or functions. When you need to run a task off the main dispatch queue, you call one of the dispatch_ functions, the primary one being dispatch_async(). When you call dispatch_async(), you need to specify which queue to run the task on. To get a queue, you call one of the dispatch_queue_create() or dispatch_get_ functions, the primary one being dispatch_get_global_queue().
NOTE: Swift 3 changed this from a function model to an object model. The dispatch_ functions are instance methods of DispatchQueue. The dispatch_get_ functions are turned into class methods/properties of DispatchQueue.
// Swift 3
DispatchQueue.global(qos: .background).async {
    var calculation = arc4random()
}

// Swift 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
    var calculation = arc4random()
}
The trouble here is that any and all tasks which update the UI must be run on the main thread. This is usually done by calling dispatch_async() on the main queue (dispatch_get_main_queue()).
// Swift 3
DispatchQueue.global(qos: .background).async {
    var calculation = arc4random()
    DispatchQueue.main.async {
        print("\(calculation)")
    }
}

// Swift 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
    var calculation = arc4random()
    dispatch_async(dispatch_get_main_queue()) {
        print("\(calculation)")
    }
}
The gory details are messy. To keep it simple, dispatch queues manage thread pools. It is up to the dispatch queue to create, run, and eventually dispose of threads. The main queue is a special queue which has only 1 thread. The operating system is tasked with assigning threads to a processor and executing the task running on the thread.
With all that out of the way, now I will answer your questions.
Is a single thread allocated to a piece of code until that code has completed?
A task will run in a single thread.
Are the threads prioritised to whichever piece of code is run first?
Tasks are assigned to a thread. A task will not change which thread it runs on. If a task needs to run in another thread, then it creates a new task and assigns that new task to the other thread.
What is the difference between main queue and thread?
The main queue is a dispatch queue which has 1 thread. This single thread is also known as the main thread.
Can threads run at the same time?
Threads are assigned to execute on processors by the operating system. If your device has multiple processors (they all do now-a-days), then multiple threads are executing at the same time.
If so, how can I specify which parts of my code should run on a selected thread?
Break your code into tasks. Dispatch the tasks on a dispatch queue.

Disabling a System.Threading.Timer instance while its callback is in progress

I am using two instances of System.Threading.Timer to fire off 2 tasks that are repeated periodically.
My question is: If the timer is disabled, but at that point in time the timer is executing its callback on a thread, will the Main method exit, or will it wait for the executing callbacks to complete?
In the code below, Method1RunCount is synchronized for read and write using a lock statement (this part of the code is not shown below). The callback for timer1 increments Method1RunCount by 1 at the end of each run.
static void Main(string[] args)
{
    TimerCallback callback1 = Method1;
    System.Threading.Timer timer1 = new System.Threading.Timer(callback1, null, 0, 90000);

    TimerCallback callback2 = Method2;
    System.Threading.Timer timer2 = new System.Threading.Timer(callback2, null, 0, 60000);

    while (true)
    {
        System.Threading.Thread.Sleep(250);

        if (Method1RunCount == 4)
        {
            // DISABLE the TIMERS
            timer1.Change(System.Threading.Timeout.Infinite, System.Threading.Timeout.Infinite);
            timer2.Change(System.Threading.Timeout.Infinite, System.Threading.Timeout.Infinite);
            break;
        }
    }
}
This kind of code tends to work by accident: the period of the timer is large enough to avoid the threading race on the Method1RunCount variable. Make the period smaller and there's a real danger that the main thread won't see the value "4" at all. The odds go down considerably when the processor is heavily loaded and the main thread doesn't get scheduled for a while. The timer's callback can then execute more than once while the main thread is waiting for the processor, completely missing the value getting incremented to 4. Note how the lock statement does not in fact prevent this; the lock isn't held by the main thread, since it is probably sleeping.
There's also no reasonable guess you can make at how often Method2 runs. Not just because it has a completely different timer period but fundamentally because it isn't synchronized to either the Method1 or the Main method execution at all.
You'd normally increment Method1RunCount at the end of Method1. That doesn't otherwise guarantee that Method1 won't be aborted. It runs on a threadpool thread; threadpool threads have the Thread.IsBackground property set to true, so the CLR will readily abort them when the main thread exits. This, again, tends not to cause a problem by accident.
If it is absolutely essential that Method1 executes exactly 4 times, then the simple way to ensure that is to let Method1 do the counting. Calling Timer.Change() inside the method is fine. Use a class like AutoResetEvent to let the main thread know about it; the main thread then no longer needs the Sleep at all. You still need a lock to ensure that Method1 cannot be re-entered while it is executing. A good way to know that you are getting thread synchronization wrong is when you see yourself using Thread.Sleep().
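A minimal sketch of that pattern, written here in C++ with std::condition_variable standing in for AutoResetEvent (the question itself is C#; the names and the simulated timer loop are made up for illustration). The callback does its own counting and signals the waiting thread when it reaches 4, so the main thread simply waits instead of polling with Sleep:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex              m;
std::condition_variable done;      // plays the role of AutoResetEvent
int                     runCount = 0;

void method1()                     // stands in for the timer callback
{
    // ... do the periodic work ...
    std::lock_guard<std::mutex> lock(m);
    if (++runCount == 4)
        done.notify_one();         // tell the waiting thread we are finished
}

int main()
{
    // Simulated periodic timer: invoke method1 every 100 ms, four times.
    std::thread timer([] {
        for (int i = 0; i < 4; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            method1();
        }
    });

    // The main thread blocks here instead of polling with Sleep.
    std::unique_lock<std::mutex> lock(m);
    done.wait(lock, [] { return runCount == 4; });
    lock.unlock();

    timer.join();
    return 0;
}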
From the docs on System.Threading.Timer (http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx):
When a timer is no longer needed, use the Dispose method to free the resources held by the timer. Note that callbacks can occur after the Dispose() method overload has been called, because the timer queues callbacks for execution by thread pool threads. You can use the Dispose(WaitHandle) method overload to wait until all callbacks have completed.

How does Wait/Signal (semaphore) implementation pseudo-code "work"?

Wait(semaphore sem) {
    DISABLE_INTS
    sem.val--
    if (sem.val < 0) {
        add thread to sem.L
        block(thread)
    }
    ENABLE_INTS
}

Signal(semaphore sem) {
    DISABLE_INTS
    sem.val++
    if (sem.val <= 0) {
        th = remove next thread from sem.L
        wakeup(th)
    }
    ENABLE_INTS
}
If block(thread) stops a thread from executing, how, where, and when does it return?
Which thread enables interrupts following the Wait()?
The thread that called block() shouldn't return until another thread has called wakeup(thread)!
But how does that other thread get to run?
Where exactly does the thread switch occur?
block(thread) works this way:
1. Enables interrupts.
2. Uses some kind of waiting mechanism (provided by the operating system, or busy waiting in the simplest case) to wait until wakeup(thread) is called on this thread. At this point the thread yields its time to the scheduler.
3. Disables interrupts and returns.
Yes, UP and DOWN are mostly useful when called from different threads, but it is not impossible to call them from one thread - if you initialize the semaphore with a value > 0, then the same thread can enter the critical section and execute both DOWN (before) and UP (after). The value that initializes the semaphore tells how many threads can enter the critical section at once, which might be 1 (a mutex) or any other positive number.
How are the threads created? That is not shown on the lecture slide, because the slide only shows the principle of how a semaphore works, as pseudocode. How you use those semaphores in your application is a completely different story.
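As a rough user-space analogue of that pseudocode (my own sketch, not the lecture's code), here is a counting semaphore in C++ where the mutex plays the role of DISABLE_INTS/ENABLE_INTS and the condition variable plays the role of block()/wakeup(). One bookkeeping difference: the slide lets sem.val go negative to count the waiting threads, while this version simply keeps the count at zero while threads wait. The blocked thread "returns" from wait() only after another thread has called signal() and the scheduler resumes it:

#include <condition_variable>
#include <mutex>

class Semaphore {                                // illustrative name, not from the slides
public:
    explicit Semaphore(int initial) : val_(initial) {}

    void wait() {                                // DOWN / P
        std::unique_lock<std::mutex> lock(m_);   // plays the role of DISABLE_INTS
        while (val_ == 0)                        // no permits available: block this thread
            cv_.wait(lock);                      // block(thread): releases the lock, sleeps,
                                                 // and re-acquires the lock when woken
        --val_;
    }                                            // lock released here: ENABLE_INTS

    void signal() {                              // UP / V
        {
            std::lock_guard<std::mutex> lock(m_);
            ++val_;
        }
        cv_.notify_one();                        // wakeup(th): move one waiter back to ready
    }

private:
    std::mutex              m_;
    std::condition_variable cv_;
    int                     val_;
};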

What are working threads?

What are these working threads? How do I implement them, and when should I use them? I ask this because many people mention them, but I can't find any examples of them on the net. Or is it just a way of saying "creating threads"? Thanks.
"Working threads" isn't itself a meaningful term in the threading world.
I guess you mean to ask, "What are worker threads?"
In that case, let me tell you that a worker thread is commonly used to handle background tasks that the user shouldn't have to wait for to continue using your application,
e.g. recalculation and background printing.
To implement a worker thread, you define a controlling function that defines the thread. When this function is entered, the thread starts, and when it exits, the thread terminates. This function should have the following prototype:
UINT MyControllingFunction( LPVOID pParam );
A short snippet implementing the controlling function of a worker thread:
UINT MyThreadProc( LPVOID pParam )
{
    CMyObject* pObject = (CMyObject*)pParam;

    if (pObject == NULL ||
        !pObject->IsKindOf(RUNTIME_CLASS(CMyObject)))
        return 1;   // if pObject is not valid

    // do something with 'pObject'

    return 0;   // thread completed successfully
}

// inside a different function in the program
...
pNewObject = new CMyObject;
AfxBeginThread(MyThreadProc, pNewObject);
...
"Worker thread" is a generic term for a thread which performs some task independent of some primary thread. Depending on usage, it may simply mean any thread other than the primary UI thread, or it may mean a thread which performs a well-scoped task (i.e. a 'job' rather than a continuous operation which lasts the lifetime of the application).
For example, you might spawn a worker thread to retrieve a file from a remote computer over a network. It might send progress updates to the application's main thread.
I use a worker, or background thread, any time that I want to perform a lengthy task without tying up my user interface. Threads often allow me to simplify my code by making a continuous series of statements, rather than a convoluted, non-blocking architecture.
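For a framework-free illustration (my own sketch in standard C++, not tied to the MFC example above), here is a worker thread that performs a lengthy, well-scoped job while the launching thread stays free and just reports progress:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    std::atomic<int> progress{0};   // shared progress indicator, written by the worker

    // The worker thread performs the lengthy background job.
    std::thread worker([&progress] {
        for (int step = 1; step <= 10; ++step) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100)); // simulate work
            progress = step * 10;
        }
    });

    // The launching thread stays responsive; here it just displays progress.
    while (progress < 100) {
        std::cout << "progress: " << progress << "%\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }

    worker.join();                  // wait for the well-scoped job to finish
    std::cout << "done\n";
    return 0;
}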
