Can someone help me understand thread enqueuing when using GCD?
I want to understand the thread enqueuing that we see when setting breakpoints.
How does it work?
Does every thread execute on either the main or a global queue? Is that the reason for the enqueuing?
Thanks,
Can someone help me understand thread enqueuing when using GCD? I want to understand the thread enqueuing that we see when setting breakpoints.
I’d suggest you think of it the other way around. You don’t “enqueue” threads. You dispatch blocks of code to a queue (i.e. “enqueue”), and the dispatch queue will select the appropriate thread on which that code shall run.
For example, above, I create a queue, dispatch a block of code to that queue, and add a breakpoint. I can see that my queue spun up a thread (it’s “Thread 3” in this case) and I can see that this was “enqueued” from the viewDidLoad method running on the “main thread”.
Does every thread execute on either the main or a global queue?
Again, it’s the other way around. Code that is dispatched to a particular queue will trigger that queue to run that block of code on a particular thread.
But there are three types of queues:
the “main” queue (which runs its code on a single, special, dedicated “main” thread);
one of the various, shared “global” queues (which will select a background thread from a pool of worker threads and run the code on that thread); or
a “custom” queue that you create yourself, like the one above.
Is that the reason for the enqueuing?
This “enqueuing” is merely the process of adding a block of code to a queue. Xcode will try to show you where the code was enqueued, to help you diagnose from where the code was dispatched.
Related
This is a question that has confused me for a long time. I tried googling a lot but still don't quite understand it. My question is this:
System calls such as epoll(), and primitives such as mutexes and semaphores, have one thing in common: as soon as something happens (taking a mutex as the example: one thread releases the lock), a thread gets woken up (the thread waiting for the lock can be woken up).
I'm wondering how this mechanism (an event happens in one thread, then another thread gets notified about it) is actually implemented behind the scenes. I can only come up with 2 ways:
Hardware-level interrupt: for example, as soon as another thread releases the lock, an edge trigger happens.
Busy waiting: busy waiting at a very low level. For example, as soon as another thread releases the lock, it changes a bit from 0 to 1 so that threads waiting for the lock can check this bit.
I'm not sure which of my guesses, if any, is correct. I guess reading the Linux source code would help here, but it's sort of hard for a noob like me. It would be great to get a general idea here, plus some pseudocode.
The Linux kernel has a built-in object class called a "wait queue" (other OSes have similar mechanisms). Wait queues are created for all types of "waitable" resources, so there are quite a few of them around the kernel. When a thread detects that it must wait for a resource, it joins the relevant wait queue. The process goes roughly as follows:
The thread adds its control structure to the linked list associated with the desired wait queue.
The thread calls the scheduler, which marks the calling thread as sleeping, removes it from the "ready to run" list and stashes its context away from the CPU. The scheduler is then free to select any other thread context to load onto the CPU instead.
When the resource becomes available, another thread (be it a user/kernel thread or a task scheduled by an interrupt handler - those usually piggyback on special "work queue" threads) invokes a "wake up" call on the relevant wait queue. "Waking up" means that the scheduler removes one or more thread control structures from the wait queue's linked list and adds those threads to the "ready to run" list, which enables them to be scheduled in due course.
A bit more technical overview is here:
http://www.makelinux.net/ldd3/chp-6-sect-2
I have a TListView in the main form (thread) and many other threads that add/delete items from the list using the Synchronize method. But the main thread also has a method that modifies the list items, and I want that method not to be interrupted by other threads that want to execute code in the main thread. Is this possible?
Do you have evidence that what you are worried about is actually happening? You shouldn't, because it can't happen; that is what Synchronize is for. Methods executing in the main thread must complete before the main thread can service the message queue to process work items dispatched via Synchronize from worker threads, so you have nothing to worry about.
When a worker thread uses Synchronize it essentially just posts a message to the main thread telling it that it has work for it to do. If the main thread is busy executing another method then the worker thread will simply block until the main thread is finished, subsequently processes the message queue, picks up the work item, executes it, and then posts back to the worker thread that the work is complete (leaving the worker thread free to then continue).
This, of course, assumes that the method in your main thread is not calling Application.ProcessMessages() or CheckSynchronize() (or that you are using a tricky component that does this, or something similar, without you knowing it -> see: Delphi 7, Windows 7, event handler, re-entrent code)
I have a job in a pthread that prepares a data set for plotting. Then I need to display this data in the main window as a graph. How can I transfer the data set from the thread to the rendering widget, which is in the main window?
I use slots and signals. What happens when my thread emits a signal more frequently than the slot can receive it?
The problem is that I use a QMap* to transfer the data set from one thread to another, and I need to be confident that the slot has finished its job before I update this map in the job thread.
Firstly, I assume you mean you have a job in a QThread, not a pthread (as in a POSIX thread). In that case, you're right to use signals and slots to pass the data to the main thread for rendering.
How frequent is "more frequently than the slot could receive it"? Have you tried it and run into problems, or are you just speculating about something you think may go wrong? If you are actually having a problem with sending too many signals, then batch up the data on the processing thread and send the batch periodically on a timer.
As for ensuring the slot has finished its job, you can use a QMutex to control access to the QMap in each thread. The Qt help for QMutex clearly explains its usage: lock the mutex, do the work, and then unlock.
I'm using the logback implementation and have created an AsyncAppender so that I can use logging inside a thread.
The thread is something like a monitor: it consumes a BlockingQueue of objects added by other threads, and while the queue is not empty and there is no stopping signal, it logs the contents of the queue. At the same time, the queue is being filled by a few threads.
When the producer threads get the stopping signal from the coordinator, they interrupt, so they don't add any more content to the queue.
The monitor thread interrupts once there is a stopping signal (the producer threads have already interrupted) and the BlockingQueue is empty.
There are two problems with the logging of the monitor thread:
After the producers have interrupted, the queue becomes empty, so the monitor thread also interrupts immediately without showing all of the contents of the queue, even though it has removed everything from it.
The order of the messages shown (in both the console appender and the file appender) is not the order in which they were inserted into the queue.
I tried 3 different approaches: creating a static logger inside the thread, creating a non-static one, and passing in a logger from the class where the monitor thread is created.
If I do everything in a while(true){} loop in the monitor thread, everything is shown, but not in the right order, plus I then have to work out how to interrupt the thread.
I also checked the MDC, but my problem is somewhat different: I have to consume the product of the producers while they are producing, and afterwards, in case there is still stuff in the queue.
I also checked the LoggerContext inside the thread, and its started flag is false. Shouldn't it be true?
Any idea on how I can show all the content before interrupting the thread, and show it in the right order, would be valuable.
Thanks.
I just implemented a thread pool as described here:
Allen Bauer on thread pools
Very simple implementation, works fine, but my application no longer shuts down. It seems that two worker threads (and one other thread, I guess the queuing thread) are stuck in the function
ntdll.ZwRemoveIoCompletion
I remember having read something about I/O completions in the help entry for QueueUserWorkItem (the WinAPI function used in the thread pool implementation), but I couldn't understand it properly. I used WT_EXECUTELONGFUNCTION for my worker threads, since execution can take a while and I want a new worker thread created instead of waiting for the existing ones to finish. Some of the tasks assigned to the worker threads perform some I/O work. I tried WT_EXECUTEINIOTHREAD, but it does not seem to help.
I should mention that the main thread waits for entry to a critical section, with the call stack being
System.Halt0, System.FinalizeUnits, Classes.Finalization, TThread.Destroy,
RtlEnterCriticalSection, RtlpWaitForCriticalSection
Any ideas what I'm doing wrong here? Thanks for your help in advance.
To make sure the worker threads shut down, you need to have some way of waking them up if they are waiting on the empty IO completion port. The simplest way would seem to be to post a NULL message of some kind to the port - they should then treat this as a signal to halt in an orderly fashion.
You must leave the critical section before you can enter it again. So the problem is inside a lock.
In some thread:
EnterCriticalSection(SomeCriticalSection);
sort code...
LeaveCriticalSection(SomeCriticalSection);
In some other thread:
EnterCriticalSection(SomeCriticalSection);
clean up code...
LeaveCriticalSection(SomeCriticalSection);
If the sort code is running in the first thread and the second thread tries to run the clean-up code, the second thread will wait until the sort code finishes and the first thread leaves the critical section. Only after the critical section has been left can it be entered again. I hope this helps you narrow down the deadlock, because it is inside a critical section.
To get the completion port handle, you can save its handle when you create the completion port:
FIoCPHandle := CreateIoCompletionPort(INVALID_HANDLE_VALUE, 0 , 0, FNumberOfConcurrentThreads);
When using QueueUserWorkItem, as long as the worker threads have been returned to the thread pool, you should not have to do anything to shut them down. The WT_EXECUTEDEFAULT component of the thread pool queues the work items up onto an I/O completion port. This port is part of the thread pool's internal implementation and is not accessible to you.
Could you provide some more detailed call stacks for the threads that appear to be stuck? It would make this problem much easier to diagnose.