According to the Qt manual on QThread, the quit() function waits for the current task to terminate and then ends the event loop.
After having called quit() and wait() for proper termination, is it legal to start the same instance of the QThread again using start()?
The code seems to be working all right, but after a restart the thread ID changes.
There is no mention of this in the docs, and all the examples seem to create a new thread instance or don't call quit(), so I was wondering.
Yes, it's legal to start a thread again if it was properly stopped.
And this is what the doc says about thread id (emphasis mine):
Qt::HANDLE QThread::currentThreadId()
Returns the thread handle of the currently executing thread.
Warning: The handle returned by this function is used for internal
purposes and should not be used in any application code.
So you should not care about the thread ID changing.
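For illustration, here is a minimal sketch of a stop/restart cycle on a single QThread instance (Worker is a hypothetical QObject-based class, not from the question; error handling omitted):
QThread thread;
Worker worker;                 // hypothetical QObject-based worker
worker.moveToThread(&thread);

thread.start();                // first run: spins up an OS thread and its event loop
// ... do some work via queued signals/slots ...
thread.quit();                 // asks the event loop to exit
thread.wait();                 // blocks until run() has returned

thread.start();                // legal again: a fresh OS thread (and a new thread ID) is created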
Related
I have the same situation as this: stop thread started by qtconcurrent::run.
I need to close a child thread (started with QtConcurrent::run) on closeEvent in QMainWindow.
But my function in the child thread uses code from a *.dll; I can't use a loop, because all I do is call the external DLL, like:
QFuture<void> future = QtConcurrent::run([obj]{ obj->useDllfunc_with_longTermJob(); });
And when I close the app with the X button my GUI is closed, but the second thread with the long-term job is still working, and when it finishes I get an error.
I know of some solutions for this:
using other functions like map() or something else with QFuture cancel/stop functionality, instead of QtConcurrent::run(). But I need only one function call; run() is what I need.
or using QThread instead of QtConcurrent. But that's not good for me.
Which method is simpler and better, and how can I implement it? Is there a method I haven't listed?
Could you provide a small code sample for the solution? Thanks!
QtConcurrent::run isn't a problem here. You must have means of stopping the dllFuncWithLongTermJob. If you don't have such means, then the API you're using is broken, and you're out of luck. There's nothing you can do that'd be generally safe. Forcibly terminating a thread can leave the heap in an inconsistent state, etc. - if you need to terminate a thread, you need to immediately abort the application.
Hopefully, you can call something like stopLongTermJob that sets some flag that interrupts the dllFuncWithLongTermJob.
Then:
auto obj = new Worker;
auto objFuture = QtConcurrent::run([=]{obj->dllFuncWithLongTermJob();});
To interrupt:
obj->stopLongTermJob(); // must be thread-safe, sets a flag
objFuture.waitForFinished();
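As a rough sketch of that pattern (assuming the DLL exposes its work in resumable chunks; the names Worker and dllDoNextChunk are hypothetical, not part of any real API):
#include <atomic>

class Worker {
public:
    void stopLongTermJob() { m_stop.store(true); }       // thread-safe: only sets a flag

    void dllFuncWithLongTermJob() {
        while (!m_stop.load()) {
            if (!dllDoNextChunk())                        // hypothetical chunked DLL call
                break;                                    // job finished normally
        }
    }

private:
    bool dllDoNextChunk();                                // assumed wrapper around the DLL; returns false when done
    std::atomic<bool> m_stop{false};
};
If the DLL only offers a single blocking entry point with no cancellation hook, no flag on your side can interrupt it, which is the point made above.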
I am using NSURLSession dataTaskWithURL:completionHandler:. It looks like the completionHandler is executed on a thread different from the thread (in my case, the main thread) that calls dataTaskWithURL. My question is: since this is asynchronous, is it possible that the main thread exits while the completionHandler thread is still running because the response has not come back yet? That is the case I am trying to avoid. If this could happen, how should I solve the problem? BTW, I am building this as a framework, not an application. Thanks.
In the first part of your question you seem unsure whether the completion handler is running on a different thread. To confirm this, let's look at the NSURLSession Class Reference. In the "Creating a Session" section, the description of the following method gives us the answer.
+ sessionWithConfiguration:delegate:delegateQueue:
Swift
init(configuration configuration: NSURLSessionConfiguration,
delegate delegate: NSURLSessionDelegate?,
delegateQueue queue: NSOperationQueue?)
Objective-C
+ (NSURLSession *)sessionWithConfiguration:(NSURLSessionConfiguration *)configuration
delegate:(id<NSURLSessionDelegate>)delegate
delegateQueue:(NSOperationQueue *)queue
In the parameter table, the NSOperationQueue queue parameter has the following description.
An operation queue for scheduling the delegate calls and completion handlers. The queue need not be a serial queue. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
So we can see the default behavior is to provide a queue, whether supplied by the developer or created by the class itself. We can also see this in the discussion for the + sessionWithConfiguration: method:
Discussion
Calling this method is equivalent to calling
sessionWithConfiguration:delegate:delegateQueue: with a nil delegate
and queue.
If you would like more information, you should read Apple's Concurrency Programming Guide. This is also useful for understanding Apple's approach to threading in general.
So the completion handler from - dataTaskWithURL:completionHandler: is running on a different queue, with queues normally providing their own thread(s). This leads to the main component of your question: can the main thread exit while the completion handler is still running?
The concise answer is no, but why?
To answer this, we again turn to Apple's documentation, to a document that everyone should read early in their app-developer career!
The App Programming Guide
The Main Run Loop
An app’s main run loop processes all user-related events. The
UIApplication object sets up the main run loop at launch time and uses
it to process events and handle updates to view-based interfaces. As
the name suggests, the main run loop executes on the app’s main
thread. This behavior ensures that user-related events are processed
serially in the order in which they were received.
All of the user interaction happens on the main thread - no main thread, no main run loop, no app! So the possible condition your question mentions should never exist!
Apple seems more concerned with you doing background work on the main thread. Check out the section "Move Work off the Main Thread"...
Be sure to limit the type of work you do on the main thread of your
app. The main thread is where your app handles touch events and other
user input. To ensure that your app is always responsive to the user,
you should never use the main thread to perform long-running or
potentially unbounded tasks, such as tasks that access the network.
Instead, you should always move those tasks onto background threads.
The preferred way to do so is to use Grand Central Dispatch (GCD) or
NSOperation objects to perform tasks asynchronously.
I know this answer is long winded, but I felt the need to offer insight and detail in answering your question - "the why" is just as important and it was good review :)
NSURLSessionTasks always run in the background by default; that's why we have a completion handler, which can be used when we get a response from the web service.
If you don't get any response, check your request URL and whether the HTTPHeaderFields are set properly.
Paste your code so that we can help.
I just asked the same question. Then figured out the answer. The thread of the completion handler is set up in the init of the NSURLSession.
From the documentation:
init(configuration configuration: NSURLSessionConfiguration,
delegate delegate: NSURLSessionDelegate?,
delegateQueue queue: NSOperationQueue?)
queue - A queue for scheduling the delegate calls and completion handlers. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
My code that sets up for completion on main thread:
var session = NSURLSession(configuration: configuration, delegate:nil, delegateQueue:NSOperationQueue.mainQueue())
(Shown in Swift; Objective-C is the same.) Maybe post more code if this does not solve it.
I have a form that is responsible for creating and setting up an instance of an object, and then telling the object to go do its work. The process is a long one, so there's an area on the form where status messages appears to let the user know something is happening. Messages are set with a setMessage(string msg) function. To allow the form to remain responsive to events, I create a new thread for the object to run in, and pass it the setMessage function as a delegate to allow the object to set status messages on the form. This part is working properly. The main form is responsive and messages posted to its setMessage function appear as expected.
Because the process is a long one, and is made up of many steps, I want to allow the user to terminate the process before it's finished. To do this I created a volatile bool called _stopRequested and a function called shouldStop() that returns its value. This is also given to the object as a delegate. The object can tell if it should terminate by checking shouldStop() periodically, and if it's true, shut down gracefully.
Lastly, Windows controls are not thread safe, so the compiler will complain if a thread other than the one that created the control tries to manipulate it. Therefore, the setMessage function is wrapped in an if statement that tests for this and invokes the function using the parent thread if it's being called from the worker thread (see http://msdn.microsoft.com/en-us/library/ms171728(v=vs.80).aspx for a description).
The problem arises when the user requests a shutdown. The main form sets _stopRequested to true and then waits for the child thread to finish before closing the application. It does this by executing _child.Join(). Now the parent thread (the one running the form) is in a Join state and can't do anything. The child thread (running the long process) detects the stop flag and attempts to shut down, but before it does, it posts a status message by calling its setMessage delegate. That delegate points back to the main form, which figures out that the thread setting the message (child) is different from the thread that created the control (parent) and invokes the function on the parent thread. The parent thread is, of course, in a Join state and won't set the text on the text box until the child thread terminates. The child thread won't terminate because it's waiting for the delegate it called to return. Instant deadlock.
I've found examples of signaling a thread to terminate, and I've found examples of child threads sending messages to the parent thread, but I can't find any examples of both things happening at the same time. Can someone give me some pointers on how to avoid this deadlock? Specifically, I'd like the form to wait until the child thread terminates before closing the application but remain able to do work while it waits.
Thanks in advance for the advice.
1. (Lazy) Dispatch the method from a new thread so it doesn't lock.
2. (Re-think) The main UI thread should be able to control the child thread, so forget _stopRequested and shouldStop() and use childThread.Abort(). Abort does not kill the thread outright; it raises a ThreadAbortException in the thread, which can be handled or even canceled:
try
{
    // long-running work
}
catch (ThreadAbortException)
{
    ReleaseResources();
}
Make ReleaseResources safe by adding checks such as:
resource != null
or
resource.IsClosed()
ReleaseResources should be called on the normal (non-abort) path as well as from the abort handler.
3. (If possible) Stop the child and call ReleaseResources() from the main thread.
You may have to implement a mix of these.
I create a process using CreateProcess() with the CREATE_SUSPENDED and then go ahead to create a little patch of code inside the remote process to load a DLL and call a function (exported by that DLL), using VirtualAllocEx() (with ..., MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READWRITE), WriteProcessMemory(), then call FlushInstructionCache() on that patch of memory with the code.
After that I call CreateRemoteThread() to invoke that code, creating me a hRemoteThread. I have verified that the remote code works as intended. Note: this code simply returns, it does not call any APIs other than LoadLibrary() and GetProcAddress(), followed by calling the exported stub function that currently simply returns a value that will then get passed on as the exit status of the thread.
Now comes the peculiar observation: remember that the PROCESS_INFORMATION::hThread is still suspended. When I simply ignore hRemoteThread's exit code and also don't wait for it to exit, all goes "fine". The routine that calls CreateRemoteThread() returns and PROCESS_INFORMATION::hThread gets resumed and the (remote) program actually gets to run.
However, if I call WaitForSingleObject(hRemoteThread, INFINITE) or do the following (which has the same effect):
DWORD exitCode = STILL_ACTIVE;
while(STILL_ACTIVE == exitCode)
{
Sleep(500);
if(!GetExitCodeThread(hRemoteThread, &exitCode))
break;
}
followed by CloseHandle(), this leads to hRemoteThread finishing before PROCESS_INFORMATION::hThread gets resumed, and the process simply "disappears". It is enough to let hRemoteThread finish somehow without PROCESS_INFORMATION::hThread having been resumed for the process to die.
This looks suspiciously like a race condition, since under certain circumstances hRemoteThread may still be faster and the process would likely still "disappear", even if I leave the code as is.
Does that imply that the first thread that gets to run within a process becomes automatically the primary thread and that there are special rules for that primary thread?
I was always under the impression that a process finishes when its last thread dies, not when a particular thread dies.
Also note: there is no call to ExitProcess() involved here in any way, because hRemoteThread simply returns and PROCESS_INFORMATION::hThread is still suspended when I wait for hRemoteThread to return.
This happens on Windows XP SP3, 32bit.
Edit: I have just tried Sysinternals Process Monitor to see what's happening, and I could verify my observations from before. The injected code does not crash or anything; instead, I can see that if I don't wait for the thread, it doesn't exit before I close the program into which the code got injected. I'm wondering whether the call to CloseHandle(hRemoteThread) should be postponed or something ...
Edit+1: it's not CloseHandle(). If I leave that out just for a test, the behavior doesn't change when waiting for the thread to finish.
The first thread to run isn't special.
For example, create a console app which creates a suspended thread and terminates the original thread (by calling ExitThread). This process never terminates (on Windows 7 anyway).
Or make the new thread wait for five seconds then exit. As expected, the process will live for five seconds and exit when the secondary thread terminates.
I don't know what's happening with your example. The easiest way to avoid the race is to make the new thread resume the original thread.
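One simple variant of that idea, handled entirely on the injector side rather than in the remote code, is to resume the primary thread before waiting on the remote thread. A sketch, assuming pi came from CreateProcess(..., CREATE_SUSPENDED, ...) and hRemoteThread from CreateRemoteThread():
ResumeThread(pi.hThread);                        // let the primary thread run its startup code

WaitForSingleObject(hRemoteThread, INFINITE);    // now wait for the injected stub to finish

DWORD exitCode = 0;
GetExitCodeThread(hRemoteThread, &exitCode);     // exit status returned by the stub
CloseHandle(hRemoteThread);
Note that this lets the target program start running before the injected code has finished, which may or may not be acceptable depending on why the injection is done before the resume.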
Speculating now, I do wonder if what you're doing isn't likely to cause problems anyway. For example, what happens to all the DllMain calls for the implicitly loaded DLLs? Are they unexpectedly happening on the wrong thread, are they being skipped, or are they postponed until after your code has run and the main thread starts?
Odds are good that the thread with the main (or equivalent) function calls ExitProcess (either explicitly or in its runtime library). ExitProcess, well, exits the entire process, including killing all threads. Since the main thread doesn't know about your injected code, it doesn't wait for it to finish.
I don't know that there's a good way to make the main thread wait for yours to complete...
At the moment, I am using WaitForSingleObject to wait for a sub-task thread to complete. Unfortunately, this causes my GUI to lock up. What I would like to do instead, is set a handler (in the GUI thread) that will be called after the sub-task thread is complete. Is there another function for this?
What you can do is have the last thing your thread does be posting a custom message to your window. Then handle that as a regular message using MFC's message map. If you cannot change the thread code, you can create a new thread that waits for your thread and then sends the message.
As you already noticed, it is not a good idea to lock up the GUI thread...
Edit: Posting the message is done using the PostMessage function as pointed out by Hans in the comments.
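A minimal sketch of that approach (the class name CMyDlg, the message ID, and the worker function are illustrative, not from the question):
#include <afxwin.h>                               // MFC core (assumed MFC project)

#define WM_TASK_DONE (WM_APP + 1)                 // custom notification message

// Worker thread entry point, started with AfxBeginThread.
UINT WorkerProc(LPVOID param)
{
    HWND hNotify = reinterpret_cast<HWND>(param);
    // ... long-running sub-task ...
    ::PostMessage(hNotify, WM_TASK_DONE, 0, 0);   // last thing the thread does
    return 0;
}

// In the (hypothetical) CMyDlg class:
//   message map entry:   ON_MESSAGE(WM_TASK_DONE, &CMyDlg::OnTaskDone)
//   handler:             afx_msg LRESULT OnTaskDone(WPARAM, LPARAM)
//                        { /* runs on the GUI thread after the worker finishes */ return 0; }
//   starting the worker: AfxBeginThread(WorkerProc, reinterpret_cast<LPVOID>(GetSafeHwnd()));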
You could also have a look at MsgWaitForMultipleObjects (or MsgWaitForMultipleObjectsEx).
These allow a thread to wait on handles and still service window messages (examine the return value to see what caused the call to return). Examples of usage should be available via a Google search.
http://msdn.microsoft.com/en-us/library/ms684245(VS.85).aspx
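For example, a rough sketch of such a wait loop (assuming hThread is the handle of the sub-task thread):
for (;;)
{
    DWORD r = MsgWaitForMultipleObjects(1, &hThread, FALSE, INFINITE, QS_ALLINPUT);
    if (r == WAIT_OBJECT_0)
        break;                                    // the sub-task thread has finished
    // r == WAIT_OBJECT_0 + 1: window messages are pending, so keep the GUI alive
    MSG msg;
    while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}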