How to access a worker thread from the UI thread? - multithreading

I have a worker thread running for the whole lifetime of the application, which generates events.
I can handle those events inside the UI thread by using disp = Windows::UI::Core::CoreWindow::GetForCurrentThread()->Dispatcher.
More precisely, I make the modifications to the UI by calling disp->RunAsync(...) anywhere inside the worker thread.
But I don't know how to do the inverse operation: I want some async function I can call from the UI thread to perform an operation (on some std::unique_ptr) in the worker thread when I click a button.

If I understand correctly, you want to be able to run an async operation when a button is clicked, but on a specific thread which you refer to as your worker thread.
First: since you want to use a resource from two threads, you should not use a unique_ptr; use a shared_ptr, because you share this resource between the two threads.
Second: if you don't necessarily have to run the action on a specific thread, then you can simply use Windows::System::Threading::ThreadPool::RunAsync and capture the shared_ptr by value.
For example:
namespace WST = Windows::System::Threading;

WST::ThreadPool::RunAsync(
    ref new WST::WorkItemHandler(
        [mySharedPtr](Windows::Foundation::IAsyncAction^ operation)
        {
            mySharedPtr->Foo();
        }));
In case you have to run the operation on a specific thread, then I assume you want to be able to append operations to an already running thread; otherwise you are creating a new thread anyway and you can use the above example.
So in order to append operations to an already running thread, that thread must have the functionality of receiving new operations and then running them in order. This functionality is basically what the Dispatcher provides. This is what an event loop is, also called a message dispatcher, message loop, message pump, or run loop. You can also find information by reading about the Reactor/Proactor design patterns.
This CodeProject page shows one way of implementing the pattern, and you can use a WinRT component to make it better / more convenient / more familiar. A minimal sketch of such a worker-side queue follows.
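As an illustration of that idea (not any specific WinRT API), here is a minimal sketch in standard C++ of a worker-side dispatch queue, assuming you control the worker thread's loop; the class name WorkerQueue and the stop-by-null-task convention are my own choices:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// The UI thread pushes tasks with Post(); the worker thread drains
// them in order inside Run(). This is the worker's "event loop".
class WorkerQueue
{
public:
    void Post(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_tasks.push(std::move(task));
        }
        m_wake.notify_one();
    }

    // Called only from the worker thread.
    void Run()
    {
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_wake.wait(lock, [this] { return !m_tasks.empty(); });
                task = std::move(m_tasks.front());
                m_tasks.pop();
            }
            if (!task)      // a null task is used here as the stop signal
                return;
            task();
        }
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_wake;
    std::queue<std::function<void()>> m_tasks;
};

From a button handler on the UI thread you would then call something like workerQueue.Post([&]{ myUniquePtr->Foo(); }). If the object is only ever touched on the worker thread and outlives the posted tasks, it can even stay a std::unique_ptr owned by the worker.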

Related

How to sync Delphi event while running DB operations in a background thread?

Using Delphi 7 & UIB, I'm running database operations in a background thread to eliminate problems like:
Timeout
Priority
Immediate Force-reconnect after network-loss
Non-blocked UI
Keeping an opened DB connection alive
User canceling
I've read ALL related topics here, and realized: using while isMyThreadStillRuning and not UserCanceled do sleep(100); end; isn't the recommended way to do this, but rather using TEvent.WaitFor(3000)....
The solutions here are either about sending signals FROM or TO... the thread, or doing it with messages, but never both ways.
Reading the help file, I've also found TSimpleEvent, which seems to be easier to use.
So what is the recommended way to communicate between Main-UI + DB-Thread in both ways?
Should I simply create 2+2 TSimpleEvent?
to start a new transaction (thread should stop sleeping)
force-STOP execution
to signal back when it has moved to a new stage (transaction started / executed / committed = done)
to signal back if any error happened
or should there be only 1 TEvent?
Update 2:
First tests show:
2x TSimpleEvent is enough (1 for the thread + 1 for the GUI)
Both created as public properties of the background thread
Force-terminating the thread does not work (too many errors that are impossible to handle).
Better to set a variable like Stop_yourself and let the thread cancel and free itself (while creating a new instance of the same class and trying again).
(still work in progress...)
You should move the query to a TThread. Unfortunately, anonymous threads are not available in D7, so you need to write your own TThread-derived class. Inside it, use its own DB connection to avoid sharing resources. From the caller method you can wait for the thread to end. The results should be stored somewhere in the caller class. Ensure that access to the query's parameters and to the stored results is thread-safe, by using a TMutex or TMonitor.
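Since the question is really about signalling in both directions (GUI tells the worker to start or stop, the worker reports which stage it has reached), here is a rough, language-neutral sketch of that shape, written in C++ with a condition variable rather than TSimpleEvent; all names (DbWorker, Command, Stage) are illustrative, not Delphi or UIB API:

#include <atomic>
#include <condition_variable>
#include <mutex>

// Commands flow GUI -> worker; the current stage flows worker -> GUI.
struct DbWorker
{
    enum class Command { None, RunTransaction, Stop };
    enum class Stage   { Idle, Started, Executed, Committed, Error };

    std::mutex              mutex;
    std::condition_variable wake;
    Command                 command = Command::None;
    std::atomic<Stage>      stage { Stage::Idle };

    // GUI thread: the equivalent of setting the "start"/"stop" event.
    void Post(Command c)
    {
        { std::lock_guard<std::mutex> lock(mutex); command = c; }
        wake.notify_one();
    }

    // Worker thread body: the equivalent of TEvent.WaitFor in a loop.
    void Run()
    {
        for (;;)
        {
            Command c;
            {
                std::unique_lock<std::mutex> lock(mutex);
                wake.wait(lock, [this] { return command != Command::None; });
                c = command;
                command = Command::None;
            }
            if (c == Command::Stop)
                return;                     // cancel and clean up, as in the update

            stage = Stage::Started;
            // ... run the query on the worker's own connection ...
            stage = Stage::Executed;
            // ... commit ...
            stage = Stage::Committed;       // GUI reads this, or gets a posted message
        }
    }
};

The GUI would call Post(Command::RunTransaction) from a button handler and read stage (or, better, receive a posted message) to update the display; Post(Command::Stop) plays the role of the Stop_yourself variable instead of force-terminating the thread.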

NSURLSession dataTaskWithURL

I am using NSURLSession dataTaskWithURL:completionHandler. It looks like the completionHandler is executed on a thread that is different from the thread (in my case, the main thread) that calls dataTaskWithURL. So my question is: since it is asynchronous, is it possible that the main thread exits, but the completionHandler thread is still running because the response has not come back yet? That is the case I am trying to avoid. If this could happen, how should I solve the problem? BTW, I am building this as a framework, not an application. Thanks.
In the first part of your question you seem unsure whether the completion handler is running on a different thread. To confirm this, let's look at the NSURLSession Class Reference. If we look at the "Creating a Session" section, the description of the following method gives us the answer.
+ sessionWithConfiguration:delegate:delegateQueue:
Swift
init(configuration configuration: NSURLSessionConfiguration,
delegate delegate: NSURLSessionDelegate?,
delegateQueue queue: NSOperationQueue?)
Objective-C
+ (NSURLSession *)sessionWithConfiguration:(NSURLSessionConfiguration *)configuration
delegate:(id<NSURLSessionDelegate>)delegate
delegateQueue:(NSOperationQueue *)queue
In the parameter table, the entry for the NSOperationQueue queue parameter contains the following quote.
An operation queue for scheduling the delegate calls and completion handlers. The queue need not be a serial queue. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
So we can see that the default behavior is to provide a queue, whether supplied by the developer or created by the class itself. We can see this again in the discussion for the method + sessionWithConfiguration:
Discussion
Calling this method is equivalent to calling
sessionWithConfiguration:delegate:delegateQueue: with a nil delegate
and queue.
If you would like a more information you should read Apple's Concurrency Programming Guide. This is also useful in understanding Apple's approach to threading in general.
So the completion handler from - dataTaskWithURL:completionHandler: is running on a different queue, with queues normally providing their own thread(s). This leads to the main component of your question: can the main thread exit while the completion handler is still running?
The concise answer is no, but why?
To answer this we again turn to Apple's documentation, to a document that everyone should read early in their app developer career!
The App Programming Guide
The Main Run Loop
An app’s main run loop processes all user-related events. The
UIApplication object sets up the main run loop at launch time and uses
it to process events and handle updates to view-based interfaces. As
the name suggests, the main run loop executes on the app’s main
thread. This behavior ensures that user-related events are processed
serially in the order in which they were received.
All of the user interaction happens on the main thread - no main thread, no main run loop, no app! So the condition your question mentions should never exist!
Apple seems more concerned with you doing background work on the main thread. Check out the section "Move Work off the Main Thread"...
Be sure to limit the type of work you do on the main thread of your
app. The main thread is where your app handles touch events and other
user input. To ensure that your app is always responsive to the user,
you should never use the main thread to perform long-running or
potentially unbounded tasks, such as tasks that access the network.
Instead, you should always move those tasks onto background threads.
The preferred way to do so is to use Grand Central Dispatch (GCD) or
NSOperation objects to perform tasks asynchronously.
I know this answer is long winded, but I felt the need to offer insight and detail in answering your question - "the why" is just as important and it was good review :)
NSURLSession tasks always run in the background by default; that's why we have the completion handler, which is called when we get a response from the web service.
If you don't get any response, check your request URL and whether the HTTPHeaderFields are set properly.
Paste your code so that we can help with it.
I just asked the same question, then figured out the answer. The thread for the completion handler is set up in the init of the NSURLSession.
From the documentation:
init(configuration configuration: NSURLSessionConfiguration,
     delegate delegate: NSURLSessionDelegate?,
     delegateQueue queue: NSOperationQueue?)
queue - A queue for scheduling the delegate calls and completion handlers. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
My code that sets up for completion on main thread:
var session = NSURLSession(configuration: configuration, delegate:nil, delegateQueue:NSOperationQueue.mainQueue())
(Shown in Swift; Objective-C is the same.) Maybe post more code if this does not solve it.

ListenableFuture running on different thread

I am using Futures.transform and I want my ListenableFuture to run on a separate thread. Is it possible to do that? I see ListenableFuture has a sameThreadExecutor option; is there an option to run it on a different thread?
Details:
I have a single thread that reads data from the network using some async mechanism. Depending on the requests it gets, it has to dispatch the request to another thread so that this thread can go back to listening for more requests. I am trying to use Futures.transform to do it.
You need an Executor that is not the sameThreadExecutor. The most common I've seen is the cached thread pool option, but if you're doing something simple with only one other thread, you might try this:
final ListeningExecutorService executor =
    MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());
If you're looking to run a single task in another thread, calling executor.submit() will then run your Callable on another thread. If instead you're trying to do the transformation of the future in another thread, you can pass this executor into Futures.transform.

How to do asynchronous programming in Delphi?

I have an application, where most of the actions take some time and I want to keep the GUI responsive at all times. The basic pattern of any action triggered by the user is as follows:
prepare the action (in the main thread)
execute the action (in a background thread while keeping the gui responsive)
display the results (in the main thread)
I tried several things to accomplish this but all of them are causing problems in the long run (seemingly random access violations in certain situations).
Prepare the action, then invoke a background thread and at the end of the background thread, use Synchronize to call an OnFinish event in the main thread.
Prepare the action, then invoke a background thread and at the end of the background thread, use PostMessage to inform the GUI thread that the results are ready.
Prepare the action, then invoke a background thread, then busy-wait (while calling Application.ProcessMessages) until the background thread is finished, then proceed with displaying the results.
I cannot come up with another alternative, and none of these worked perfectly for me. What is the preferred way to do this?
1) is the 'original Delphi' way. It forces the background thread to wait until the synchronized method has been executed and exposes the system to more deadlock potential than I am happy with. TThread.Synchronize has been rewritten at least twice. I used it once, on D3, and had problems. I looked at how it worked. I never used it again.
2) is the design I use most often. I use app-lifetime threads (or thread pools), create inter-thread comms objects and queue them to background threads using a producer-consumer queue based on a TObjectQueue descendant. The background thread(s) operate on the data/methods of the object, store results in the object and, when complete, PostMessage() the object (cast to lParam) back to the main thread for GUI display of the results in a message handler (cast the lParam back again). The background threads and the main GUI thread then never operate on the same object at the same time and never have to directly access any fields of each other. (A minimal sketch of the PostMessage() half of this pattern follows after point 3.)
I use a hidden window of the GUI thread (created with RegisterWindowClass and CreateWindow) for the background threads to PostMessage to, with the comms object in LParam and the 'target' TWinControl (usually a TForm class) as WParam. The trivial wndproc for the hidden window just uses TWinControl.Perform() to pass on the LParam to a message handler of the form. This is safer than PostMessaging the object directly to a TForm.Handle - that handle can, unfortunately, change if the window is recreated. The hidden window never calls RecreateWindow() and so its handle never changes.
Producer-consumer queues 'out from GUI', inter-thread comms classes/objects and PostMessage() 'in to GUI' WILL work well - I've been doing it for decades.
Re-using the comms objects is fairly easy too - just create a load in a loop at startup, (preferably in an initialization section so that the comms objects outlive all forms), and push them onto a P-C queue - that's your pool. It's easier if the comms class has a private field for the pool instance - the 'releaseBackToPool' method then needs no parameters and, if there is more than one pool, ensures that the objects are always released back to their own pool.
3) Can't really improve on David Hefferman's comment. Just don't do it.
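The PostMessage() half of pattern 2) is not Delphi-specific; here is a minimal Win32/C++ sketch of it, assuming a hidden window handle owned by the GUI thread and a user-defined message (WM_APP_WORK_DONE, WorkResult, and the function names are illustrative):

#include <windows.h>
#include <string>

const UINT WM_APP_WORK_DONE = WM_APP + 1;

// The inter-thread comms object: filled in by the background thread.
struct WorkResult
{
    std::string text;
};

// Background thread: hand the finished object to the GUI thread.
// Ownership travels with the message; the GUI side deletes (or pools) it.
void PostResultToGui(HWND hiddenGuiWindow, WorkResult* result)
{
    PostMessage(hiddenGuiWindow, WM_APP_WORK_DONE, 0,
                reinterpret_cast<LPARAM>(result));
}

// GUI thread: called from the hidden window's wndproc for WM_APP_WORK_DONE.
LRESULT HandleWorkDone(WPARAM, LPARAM lParam)
{
    WorkResult* result = reinterpret_cast<WorkResult*>(lParam);
    // ... update the UI from result->text ...
    delete result;   // or push it back into the pool instead of deleting
    return 0;
}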
You can implement the pattern in question by using OTL, as demonstrated by the OTL author here.
You could communicate data between threads as messages.
Thread1:
allocate memory for a data structure
fill it in
send a message to Thread2 with the pointer to this structure (you could either use Windows messages or implement a queue, ensuring its enqueue and dequeue methods don't have race conditions)
possibly receive a response message from Thread2...
Thread2:
receive the message with the pointer to the data structure from Thread1
consume the data
deallocate the data structure's memory
possibly send a message back to Thread1 in a similar fashion (perhaps reusing the data structure, but then you don't deallocate it)
You may end up with more than one non-GUI thread if you want your GUI not only to stay alive but also to respond to other input while input that takes a long time to process is being processed.

Worker thread doesn't have a message loop (MFC, Windows). Can we make it receive messages?

MFC provides both worker and UI threads. A UI thread is enabled with message-receiving capabilities (send, post). Is it possible to let a worker thread receive messages too?
Call CWinThread::PumpMessage() repeatedly until a WM_QUIT message is received.
It seems you need a thread that can handle messages from other threads. The other threads would add a message to this thread's message queue. In that case you may use PeekMessage to start up the loop (this forces the system to create a message queue for the thread), and then use GetMessage to get the messages. The other threads would use PostThreadMessage with the thread ID (of the thread doing Peek/GetMessage), the message code, LPARAM, and WPARAM.
It would be something like this:
void TheProcessor()
{
    MSG msg;
    // Force the system to create this thread's message queue.
    PeekMessage(&msg, NULL, WM_USER, WM_USER, PM_NOREMOVE);
    // Pump until a WM_QUIT is posted to the thread.
    while (GetMessage(&msg, NULL, 0, 0))
    {
        switch (msg.message)
        {
            /* handle your custom messages here */
        }
    }
}
The threads would call PostThreadMessage - See MSDN for more info.
When you need to send more data than LPARAM/WPARAM can hold, you eventually need to allocate it on the heap and then delete it AFTER processing the message in your custom message loop. This can be cumbersome and error-prone.
But... I would suggest you write your own class on top of std::queue/std::deque or another data structure, with methods like AddMessage/PushMessage and PopMessage (or whatever names you like). You would use SetEvent and WaitForSingleObject to signal a new message in the loop (see one of the implementations here). You may make it specific to one data type, or make it a template class that supports any data type (your underlying data structure, the queue, would use the same data type). You also need not worry about heap allocations and deletions, so this is less error-prone. You may, however, have to handle multithreading issues.
Using Windows events involves a kernel-mode transition (since events are kernel objects), so you may prefer to use condition variables, which are user-mode objects. Or you may use the unbounded_buffer class from the Concurrency Runtime library available in VC10. See this article (jump to unbounded_buffer). A rough sketch of the queue-plus-event idea is shown below.
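As an illustration of the queue-plus-event idea described above (plain Win32/C++, not MFC; the class name MessageQueue and the std::string payload are arbitrary choices):

#include <windows.h>
#include <deque>
#include <mutex>
#include <string>

// One-data-type message queue signalled with an auto-reset Win32 event.
// Producers call PushMessage from any thread; the worker loops on PopMessage.
class MessageQueue
{
public:
    MessageQueue() : m_event(CreateEvent(nullptr, FALSE, FALSE, nullptr)) {}
    ~MessageQueue() { CloseHandle(m_event); }

    void PushMessage(std::string msg)
    {
        {
            std::lock_guard<std::mutex> lock(m_lock);
            m_messages.push_back(std::move(msg));
        }
        SetEvent(m_event);                           // wake the worker
    }

    // Blocks until a message is available, then returns it.
    std::string PopMessage()
    {
        for (;;)
        {
            {
                std::lock_guard<std::mutex> lock(m_lock);
                if (!m_messages.empty())
                {
                    std::string msg = std::move(m_messages.front());
                    m_messages.pop_front();
                    return msg;
                }
            }
            WaitForSingleObject(m_event, INFINITE);  // sleep until a producer signals
        }
    }

private:
    HANDLE                  m_event;
    std::mutex              m_lock;
    std::deque<std::string> m_messages;
};

A condition variable (CONDITION_VARIABLE / SleepConditionVariableCS, or std::condition_variable) can replace the event to avoid the kernel transition mentioned above.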
Yes you can create a message queue on a worker thread. You will need to run a message pump on that thread.
