run a block on main thread immediately after NSBlockOperation on background thread - multithreading

In my project I run an operation on a background thread using NSBlockOperation:
var operationQueue = NSOperationQueue()
var iop = NSBlockOperation(block: { self.reloadSize() /*calculation...*/ })
operationQueue.addOperation(iop)
Immediately after the calculations in the background thread are completed, I need to call table.reloadData() on an NSTableView. I would do that in the very same thread; however, due to auto layout issues, the table has to be reloaded on the main thread. How can I accomplish this asynchronous relationship across both threads?

Two possible approaches:
Dispatch the reloading of the table from inside the block:
let operationQueue = NSOperationQueue()
let operation = NSBlockOperation() {
    self.reloadSize()
    ...
    dispatch_async(dispatch_get_main_queue()) { // or you can use NSOperationQueue.mainQueue().addOperationWithBlock()
        self.table.reloadData()
    }
}
operationQueue.addOperation(operation)
or just use addOperationWithBlock:
let operationQueue = NSOperationQueue()
operationQueue.addOperationWithBlock() {
    self.reloadSize()
    ...
    dispatch_async(dispatch_get_main_queue()) { // or you can use NSOperationQueue.mainQueue().addOperationWithBlock()
        self.table.reloadData()
    }
}
Create a new operation dependent upon this one:
let operationQueue = NSOperationQueue()
let operation = NSBlockOperation() {
    self.reloadSize()
    ...
}
let completionOperation = NSBlockOperation() {
    self.table.reloadData()
}
completionOperation.addDependency(operation)
operationQueue.addOperation(operation)
NSOperationQueue.mainQueue().addOperation(completionOperation)
Personally, I'd generally lean towards the first approach, though the latter approach can be useful in more complicated scenarios (e.g. the completion operation is dependent upon a number of other operations).
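For example, here is a minimal sketch of that more complicated case, in the same pre-Swift-3 style as the answer (reloadImages() and the operation names are made up purely for illustration):
let operationQueue = NSOperationQueue()

// Two hypothetical background workers.
let sizeOperation = NSBlockOperation() { self.reloadSize() }
let imagesOperation = NSBlockOperation() { self.reloadImages() }

// The table is reloaded on the main queue only after *all* of the workers have finished.
let completionOperation = NSBlockOperation() {
    self.table.reloadData()
}
completionOperation.addDependency(sizeOperation)
completionOperation.addDependency(imagesOperation)

operationQueue.addOperation(sizeOperation)
operationQueue.addOperation(imagesOperation)
NSOperationQueue.mainQueue().addOperation(completionOperation)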

Try calling CFRunLoopRun().
It runs the run loop of the current thread.
If your operation ran on the main queue, the current queue would be the main queue and the operation would run on it successfully.

Related

How to terminate a blocking tokio task?

In my application I have a blocking task that synchronously reads messages from a queue and feeds them to a running task.
All of this works fine, but the problem that I'm having is that the process does not terminate correctly, since the queue_reader task does not stop.
I've constructed a small example based on the tokio documentation at: https://docs.rs/tokio/1.20.1/tokio/task/fn.spawn_blocking.html
use tokio::sync::mpsc;
use tokio::task;

#[tokio::main]
async fn main() {
    let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
    // Some blocking task that never ends
    let queue_reader = task::spawn_blocking(move || {
        loop {
            // Stand in for receiving messages from queue
            incoming_tx.blocking_send(5).unwrap();
        }
    });
    let mut acc = 0;
    // Some complex condition that determines whether the job is done
    while acc < 95 {
        tokio::select! {
            Some(v) = incoming_rx.recv() => {
                acc += v;
            }
        }
    }
    assert_eq!(acc, 95);
    println!("Finalizing thread");
    queue_reader.abort(); // This doesn't seem to terminate the queue_reader task
    queue_reader.await.unwrap(); // <-- The process hangs on this task.
    println!("Done");
}
At first I expected that queue_reader.abort() would terminate the task; however, it doesn't. My expectation is that tokio can only do this for tasks that use .await internally, because that hands control over to tokio. Is this right?
In order to terminate the queue_reader task I introduced a oneshot channel, over which I signal the termination, as shown in the next snippet.
use tokio::task;
use tokio::sync::{oneshot, mpsc};

#[tokio::main]
async fn main() {
    let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
    // A new channel to communicate when the process must finish.
    let (term_tx, mut term_rx) = oneshot::channel();
    // Some blocking task that never ends
    let queue_reader = task::spawn_blocking(move || {
        // As long as termination is not signalled
        while term_rx.try_recv().is_err() {
            // Stand in for receiving messages from queue
            incoming_tx.blocking_send(5).unwrap();
        }
    });
    let mut acc = 0;
    // Some complex condition that determines whether the job is done
    while acc < 95 {
        tokio::select! {
            Some(v) = incoming_rx.recv() => {
                acc += v;
            }
        }
    }
    assert_eq!(acc, 95);
    // Signal termination
    term_tx.send(()).unwrap();
    println!("Finalizing thread");
    queue_reader.await.unwrap();
    println!("Done");
}
My question is, is this the canonical/best way to do this, or are there better alternatives?
Tokio cannot terminate CPU-bound/blocking tasks.
It is technically possible to kill OS threads, but generally it is not a good idea, as it's expensive to create new threads and it can leave your program in an invalid state. Even if Tokio decided this was something worth implementing, it would severely limit its implementation - it would be forced into a multithreaded model, just to support the possibility that you'd want to kill a blocking task before it's finished.
Your solution is pretty good; give your blocking task the responsibility for terminating itself and provide a way to tell it to do so. If this future was part of a library, you could abstract the mechanism away by returning a "handle" to the task that had a cancel() method.
Are there better alternatives? Maybe, but that would depend on other factors. Your solution is good and easily extended, for example if you later needed to send different types of signal to the task.

Transfer Coroutine Dispatcher's Tasks to another Dispatcher

My question is simple, given Dispatcher 1, how would you transfer Dispatcher 1's tasks to another Dispatcher named Dispatcher 2?
Not sure what transfer would mean here, but yes, you can jump between threads. You can use withContext within a coroutine to switch between them. Like so:
val customContext = newSingleThreadContext("CustomContext")

runBlocking(Dispatchers.Default) {
    // Started in DefaultDispatcher
    withContext(customContext) {
        // Working in CustomContext
    }
    // Back to DefaultDispatcher
}

runBlocking(Dispatchers.Unconfined) {
    // Started in main thread
    withContext(Dispatchers.Default) {
        // Working in DefaultDispatcher
    }
    // Back to main thread
}

Swift - how can I wait for dispatch_async to finish?

When my app starts for the first time, I perform a task of importing data from disk into CoreData. I do this on a background thread. Then I switch to the main thread and perform the load from CoreData.
The problem is that sometimes the load from CoreData occurs before the import from disk is finished. So I need a way to wait for the import to finish and only then perform the load from the db.
How can I do this in Swift?
My code looks like this:
func firstTimeLaunch() {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0)) { [unowned self] in
        self.importArticlesListFromDisk()
        self.importArticlesFromDisk()
        dispatch_async(dispatch_get_main_queue()) { [unowned self] in
            self.loadArticlesListFromDb()
            self.loadArticlesFromDb()
        }
    }
}
Perhaps you should try adding a completion handler to importArticlesListFromDisk and importArticlesFromDisk, then loading from the db in the completion block.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0)) { [unowned self] in
    self.importArticlesAndArticlesListFromDisk() {
        // Completion Handler
        dispatch_async(dispatch_get_main_queue()) { [unowned self] in
            self.loadArticlesListFromDb()
            self.loadArticlesFromDb()
        }
    }
}
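A minimal sketch of what such a completion-handler-based import could look like (importArticlesAndArticlesListFromDisk is a name made up in this answer; the body below just assumes it wraps the asker's two existing synchronous imports):
func importArticlesAndArticlesListFromDisk(completion: () -> Void) {
    // Runs on whatever queue the caller dispatched this work to.
    self.importArticlesListFromDisk()
    self.importArticlesFromDisk()
    // Only once both imports have returned is the completion handler invoked.
    completion()
}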
I'd recommend using NSOperations. There is a great talk about this from WWDC 2015.
The sample code is also quite interesting for that purpose.
Essentially, you want to create a concurrent operation for each of your imports.
Let's imagine we override the start function of an operation importing your article list from disk:
override func start() {
    // long running import operation, even async...
    // when done: self.finish() // needs KVO overrides
    // finish causes the concurrent operation to terminate
}
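For reference, here is a minimal sketch of what such a concurrent (asynchronous) subclass might look like, in the same pre-Swift-3 style as the question; AsyncImportOperation and the completion-based importArticlesListFromDisk are hypothetical names used only for illustration:
class AsyncImportOperation: NSOperation {
    private var _executing = false
    private var _finished = false

    // Report this operation as asynchronous and expose our own state.
    override var asynchronous: Bool { return true }
    override var executing: Bool { return _executing }
    override var finished: Bool { return _finished }

    override func start() {
        if cancelled {
            willChangeValueForKey("isFinished")
            _finished = true
            didChangeValueForKey("isFinished")
            return
        }
        willChangeValueForKey("isExecuting")
        _executing = true
        didChangeValueForKey("isExecuting")

        // Kick off the long-running / async import here, then call finish()
        // from its completion handler.
        importArticlesListFromDisk { self.finish() }
    }

    private func finish() {
        // The KVO notifications are what let the queue know the operation is done.
        willChangeValueForKey("isExecuting")
        willChangeValueForKey("isFinished")
        _executing = false
        _finished = true
        didChangeValueForKey("isExecuting")
        didChangeValueForKey("isFinished")
    }

    // Stand-in for the actual import; assumed to call its completion when done.
    private func importArticlesListFromDisk(completion: () -> Void) {
        // ... long running import ...
        completion()
    }
}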
A very nice thing you can do with operations is to set dependencies:
let importArticlesFromDiskOp = ...
let importArticlesFromDBOp = ...
importArticlesFromDBOp.addDependency(importArticlesFromDiskOp)
This way your import from DB would only run after the import from disk is done. I personally use this a LOT.
good luck
R

Trouble understanding NSOperation & NSOperationQueue (swift)

I'm having trouble understanding how to create a synchronous NSOperationQueue.
I've created a prototype that basically says:
Create 4 operations that take either a very long or a very short time to complete
Regardless of time to complete, they should finish in the order they are created in the queue.
My NSOperation class is very simple:
class LGOperation : NSOperation
{
    private var operation: () -> ()

    init(operation: () -> ())
    {
        self.operation = operation
    }

    override func main()
    {
        if self.cancelled {
            return
        }
        operation()
    }
}
And my test class is also quite simple:
class LGOperationTest
{
    class func downloadImage(url: String)
    {
        // This is a simple AFHTTPRequestOperation for the image
        LGImageHelper.downloadImageWithUrl(url, complete: { (image: AnyObject?) in
            println("downloaded \(url)")
        })
    }

    class func test()
    {
        var queue = NSOperationQueue.mainQueue()
        queue.maxConcurrentOperationCount = 1
        var op1 = LGOperation(operation: { self.downloadImage("http://www.toysrus.com/graphics/tru_prod_images/Animal-Planet-T-Rex---Grey--pTRU1-2909995dt.jpg") })
        var op2 = LGOperation(operation: { println("OPERATION 2") })
        var op3 = LGOperation(operation: { self.downloadImage("http://www.badassoftheweek.com/trex.jpg") })
        var op4 = LGOperation(operation: { println("OPERATION 3") })
        var ops: [NSOperation] = [op1, op2, op3, op4]
        op2.addDependency(op1)
        op3.addDependency(op2)
        op4.addDependency(op3)
        op4.completionBlock = {
            println("finished op 4")
        }
        queue.addOperation(op1)
        queue.addOperation(op2)
        queue.addOperation(op3)
        queue.addOperation(op4)
        println("DONE")
    }
}
So what I would expect here is for the operations to finish in order; instead, the output is:
DONE
OPERATION 2
OPERATION 4
finished op 4
downloaded http://www.toysrus.com/graphics/tru_prod_images/Animal-Planet-T-Rex---Grey--pTRU1-2909995dt.jpg
downloaded http://www.badassoftheweek.com/trex.jpg
WHY can't I make web requests fire synchronously with other code? (I know I can use completion blocks and chain them but I'd like to figure out how to do it with NSOperation)
Operation queues are used to schedule asynchronous operations; primarily, these operations may be long running and you don't want to block the current (typically UI) thread. Blocking the UI thread leads to an unresponsive UI.
When you create 4 operations, when they finish depends on what is being performed. In your case, you have operations that are doing println (which is very fast) and you have operations that are downloading from the internet (which is very slow).
The whole point of the operation queue is to allow you to fire these operations asynchronously, and whenever the operations complete, fire the completion handler.
In other words, you cannot control the sequence.
If you want to control the sequence, my suggestion is to do the following:
Start operation 1
In operation 1's completion handler, start operation 2
In operation 2's completion handler, start operation 3
In operation 3's completion handler, start operation 4
In this way, you still achieve the benefits of Operation queues (you do not block the UI thread), and you can chain the operations in order.
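A minimal sketch of that chaining, reusing the names from the question (downloadImageWithUrl and its signature are copied from the asker's code; the chaining itself is just the four steps above written out):
class func testChained() {
    let url1 = "http://www.toysrus.com/graphics/tru_prod_images/Animal-Planet-T-Rex---Grey--pTRU1-2909995dt.jpg"
    let url2 = "http://www.badassoftheweek.com/trex.jpg"
    // Operation 1: first download; everything else is started from its completion handler.
    LGImageHelper.downloadImageWithUrl(url1, complete: { (image: AnyObject?) in
        println("downloaded \(url1)")
        // Operation 2: quick work, safe to run inline once operation 1 has finished.
        println("OPERATION 2")
        // Operation 3: second download, started only after operations 1 and 2 are done.
        LGImageHelper.downloadImageWithUrl(url2, complete: { (image: AnyObject?) in
            println("downloaded \(url2)")
            // Operation 4: final step, guaranteed to run last.
            println("OPERATION 4")
        })
    })
}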

.wait() on a task in c++/cx throws exception

I have a function which calls Concurrency::create_task to perform some work in the background. Inside that task, there is a need to call a connectAsync method on the StreamSocket class in order to connect a socket to a device. Once the device is connected, I need to grab some references to things inside the connected socket (like input and output streams).
Since it is an asynchronous method and will return an IAsyncAction, I need to create another task on the connectAsync function that I can wait on. This works without waiting, but complications arise when I try to wait() on this inner task in order to error check.
Concurrency::create_task( Windows::Devices::Bluetooth::Rfcomm::RfcommDeviceService::FromIdAsync( device_->Id ) )
    .then( [ this ]( Windows::Devices::Bluetooth::Rfcomm::RfcommDeviceService ^device_service_ )
{
    _device_service = device_service_;
    _stream_socket = ref new Windows::Networking::Sockets::StreamSocket();

    // Connect the socket
    auto inner_task = Concurrency::create_task( _stream_socket->ConnectAsync(
        _device_service->ConnectionHostName,
        _device_service->ConnectionServiceName,
        Windows::Networking::Sockets::SocketProtectionLevel::BluetoothEncryptionAllowNullAuthentication ) )
        .then( [ this ]()
    {
        // grab references to streams, other things.
    } ).wait(); // throws exception here, but task executes
} );
Basically, I have figured out that the same thread (presumably the UI thread) that creates the initial task to connect also executes that task AND the inner task. Whenever I attempt to call .wait() on the inner task from the outer one, I immediately get an exception. However, the inner task will then finish and connect successfully to the device.
Why are my async chains executing on the UI thread? How can I properly wait on these tasks?
In general you should avoid .wait() and just continue the asynchronous chain. If you need to block for some reason, the only fool-proof mechanism would be to explicitly run your code from a background thread (e.g., the WinRT thread pool).
You could try using the .then() overload that takes a task_options and pass concurrency::task_options(concurrency::task_continuation_context::use_arbitrary()), but that doesn't guarantee the continuation will run on another thread; it just says that it's OK if it does so -- see documentation here.
You could set an event and have the main thread wait for it. I have done this with some IO async operations. Here is a basic example of using the thread pool, using an event to wait on the work:
TEST_METHOD(ThreadpoolEventTestCppCx)
{
    Microsoft::WRL::Wrappers::Event m_logFileCreatedEvent;
    m_logFileCreatedEvent.Attach(CreateEventEx(nullptr, nullptr, CREATE_EVENT_MANUAL_RESET, WRITE_OWNER | EVENT_ALL_ACCESS));

    long x = 10000000;
    auto workItem = ref new WorkItemHandler(
        [&m_logFileCreatedEvent, &x](Windows::Foundation::IAsyncAction^ workItem)
    {
        while (x--);
        SetEvent(m_logFileCreatedEvent.Get());
    });

    auto asyncAction = ThreadPool::RunAsync(workItem);
    WaitForSingleObjectEx(m_logFileCreatedEvent.Get(), INFINITE, FALSE);
    long i = x;
}
Here is a similar example except it includes a bit of Windows Runtime async IO:
TEST_METHOD(AsyncOnThreadPoolUsingEvent)
{
    std::shared_ptr<Concurrency::event> _completed = std::make_shared<Concurrency::event>();
    int i;

    auto workItem = ref new WorkItemHandler(
        [_completed, &i](Windows::Foundation::IAsyncAction^ workItem)
    {
        Windows::Storage::StorageFolder^ _picturesLibrary = Windows::Storage::KnownFolders::PicturesLibrary;
        Concurrency::task<Windows::Storage::StorageFile^> _getFileObjectTask(_picturesLibrary->GetFileAsync(L"art.bmp"));
        auto _task2 = _getFileObjectTask.then([_completed, &i](Windows::Storage::StorageFile^ file)
        {
            i = 90210;
            _completed->set();
        });
    });

    auto asyncAction = ThreadPool::RunAsync(workItem);
    _completed->wait();
    int j = i;
}
I tried using an event to wait on Windows Runtime Async work, but it blocked. That's why I had to use the threadpool.
