Understanding the idea of asyncAndWait - multithreading

I have found an API that lacks documentation, and I'm hoping someone out there understands what it's about.
https://developer.apple.com/documentation/dispatch/dispatchqueue/3656285-asyncandwait
asyncAndWait(execute:)
This sounds like an oxymoron. Surely waiting on the thread that dispatched this means it was dispatched sync (not async)?
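For what it's worth, the C-level counterparts in libdispatch are dispatch_async_and_wait() and dispatch_async_and_wait_f(), and the summary on the linked page is that the call submits the work item and returns only after it has finished executing, so the caller really does wait. A minimal sketch (the queue label is arbitrary):

/* Sketch: submit a work item and wait for it to finish.
 * dispatch_async_and_wait_f() is the plain-C variant; it blocks the
 * calling thread until work() has finished.
 */
#include <dispatch/dispatch.h>
#include <stdio.h>

static void work(void *ctx)
{
    (void)ctx;
    printf("work item running\n");
}

int main(void)
{
    dispatch_queue_t q = dispatch_queue_create("com.example.queue", DISPATCH_QUEUE_SERIAL);

    dispatch_async_and_wait_f(q, NULL, work);  /* does not return until work() is done */
    printf("caller resumes only after the work item finished\n");

    dispatch_release(q);
    return 0;
}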

Related

RTOS: requesting non-sleeping task to wake up causes next call to sleep() to not sleep - is that good?

I'm rewriting the existing real-time kernel TNKernel; I have used it for a couple of years, but I don't like many of its design decisions (as well as implementation details), so I decided to fork it and have fun implementing what I want. Anyone who is interested can read more at the project page on Bitbucket.
TNKernel has one feature that is, in my opinion, strange: it has a service tn_task_sleep(int timeout), which puts the current task to sleep, and tn_task_wakeup(struct TN_Task *task), which wakes a currently sleeping task up.
The strangeness is that it is legal to call tn_task_wakeup() on a non-sleeping task; in that case a special flag such as wakeup_request is set, and on the next call to tn_task_sleep() the flag is cleared and the task does not sleep.
All of this looks to me like a dirty hack: it might be used as a workaround for race-condition problems, or as a hacky replacement for a semaphore.
It just encourages the programmer to go with the hacky approach instead of creating a straightforward semaphore and providing proper synchronization. So I want to remove this service from my project. Is it a good idea to get rid of it, or have I missed something important? Why would we ever need it?
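For comparison, here is a small user-space sketch of the semaphore alternative, using plain POSIX threads and semaphores standing in for the RTOS services (none of these names are TNKernel's). A counting semaphore already remembers a wakeup that arrives before the corresponding wait, so no wakeup_request-style flag is needed:

/* A post that happens before the wait is simply kept in the
 * semaphore count; the next wait returns immediately.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t wake;                 /* plays the role of "wakeup" */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        sem_wait(&wake);           /* "sleep until woken" */
        printf("worker: woken up (%d)\n", i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;

    sem_init(&wake, 0, 0);
    sem_post(&wake);               /* "wakeup" before anyone sleeps: not lost */

    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    sem_post(&wake);
    sem_post(&wake);

    pthread_join(t, NULL);
    sem_destroy(&wake);
    return 0;
}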
Since no one said I was wrong, I assumed I was right and removed these strange "features" from my kernel.

Creating promise in one thread and setting it in another

Can I have a boost::promise<void> created in one thread and set its value in a different thread through boost::promise<void>::set_value()?
I think I am getting a crash because of this, so my guess is that the answer is no, but I would need confirmation. Thanks in advance.
P.S.: Note that I am using the Boost implementation.
Yes, you can do that, but you must ensure that the call to set_value() does not conflict with anything in the other thread, such as the completion of the constructor or the start of the destructor.
(According to the C++ standard you cannot even make potentially concurrent calls to set_value() and get_future() but that is a defect and should get fixed.)
To give a more precise answer it would be necessary to see exactly what your code is doing.

Producer/Consumer in the kernel space - Linux

I would like one thread to queue requests in a request queue and another to serve those requests. The producer should wake up the consumer when there is a new request queued.
Is there anyone who has done this already or knows how to do it?
I have tried several tutorials on the internet and none of them worked cleanly. They either miss a request, cause a system lockup or instability, or simply never terminate.
Note: my question is essentially similar to this one. However, I won't be as specific as the person who asked it. Anyone able and willing to help can throw in their two cents and maybe we can work something out.
Thanks!
You can use work queues. Work queues are simple; once you have set up your work queue, you use something like the following:
DECLARE_WORK(name, void (*function)(void *), void *data);
Your function will be scheduled and called later; take a look at this article.
I also highly recommend this book: Linux Device Drivers.
edit: I just saw that you already linked an SO post where they use work queues. Have you tried it out? Did you run into any issues? I suggest you start with a really simple example, just to check that it works, and implement your core functionality later.
Update:
From the official Documentation:
Some users depend on the strict execution ordering of ST wq. The combination of @max_active of 1 and WQ_UNBOUND is used to achieve this behavior. Work items on such wq are always queued to the unbound worker-pools and only one work item can be active at any given time, thus achieving the same ordering property as ST wq.
That way you will have a guaranteed FIFO execution of your workers. But be aware that the work may be executed on different CPUs. You have to use memory barriers to ensure visibility (eg. wmb()).
Update:
As @user2009594 mentioned, a single-threaded wq can be created using the following macro defined in linux/workqueue.h:
#define create_singlethread_workqueue(name) \
    alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM, 1, (name))
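To make the producer/consumer shape concrete, here is a minimal sketch of a module built around an ordered workqueue. It is written against the current workqueue API, where DECLARE_WORK takes just (name, func) and the handler receives a struct work_struct *; the request_item structure and all names are illustrative, not taken from the question:

/* Producer/consumer sketch: producers append to a spinlock-protected
 * list and kick a work item; the work item (the consumer) drains the
 * list in workqueue context.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct request_item {
    struct list_head link;
    int payload;
};

static struct workqueue_struct *req_wq;
static LIST_HEAD(req_list);
static DEFINE_SPINLOCK(req_lock);

/* Consumer: runs in workqueue context and drains all pending requests. */
static void consume_requests(struct work_struct *work)
{
    struct request_item *req;
    unsigned long flags;

    for (;;) {
        spin_lock_irqsave(&req_lock, flags);
        req = list_first_entry_or_null(&req_list, struct request_item, link);
        if (req)
            list_del(&req->link);
        spin_unlock_irqrestore(&req_lock, flags);

        if (!req)
            break;

        pr_info("handling request %d\n", req->payload);
        kfree(req);
    }
}

static DECLARE_WORK(consume_work, consume_requests);

/* Producer: append a request and wake the consumer. */
static int produce_request(int payload)
{
    struct request_item *req = kmalloc(sizeof(*req), GFP_KERNEL);
    unsigned long flags;

    if (!req)
        return -ENOMEM;
    req->payload = payload;

    spin_lock_irqsave(&req_lock, flags);
    list_add_tail(&req->link, &req_list);
    spin_unlock_irqrestore(&req_lock, flags);

    queue_work(req_wq, &consume_work);   /* no-op if already pending */
    return 0;
}

static int __init pc_init(void)
{
    int ret;

    req_wq = alloc_ordered_workqueue("req_wq", 0);
    if (!req_wq)
        return -ENOMEM;

    ret = produce_request(42);
    if (ret)
        destroy_workqueue(req_wq);
    return ret;
}

static void __exit pc_exit(void)
{
    /* destroy_workqueue() waits for queued work to finish first. */
    destroy_workqueue(req_wq);
}

module_init(pc_init);
module_exit(pc_exit);
MODULE_LICENSE("GPL");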
Multicast Netlink sockets can also work well here. I recently did the same thing; the only difference was that my consumer was in the kernel while the producers were in user space. The same approach can be used in kernel-only space.

How to see what started a thread in Xcode?

I have been asked to debug, and improve, a complex multithreaded app, written by someone I don't have access to, that uses concurrent queues (both GCD and NSOperationQueue). I don't have access to a plan of the multithreaded architecture, that's to say a high-level design document of what is supposed to happen when. I need to create such a plan in order to understand how the app works and what it's doing.
When running the code and debugging, I can see in Xcode's Debug Navigator the various threads that are running. Is there a way of identifying where in the source-code a particular thread was spawned? And is there a way of determining to which NSOperationQueue an NSOperation belongs?
For example, I can see in the Debug Navigator (or by using LLDB's "thread backtrace" command) a thread's stacktrace, but the 'earliest' user code I can view is the overridden (NSOperation*) start method - stepping back earlier in the stack than that just shows the assembly instructions for the framework that invokes that method (e.g. __block_global_6, _dispatch_call_block_and_release and so on).
I've investigated and sought various debugging methods but without success. The nearest I got was the idea of method swizzling, but I don't think that's going to work for, say, queued NSOperation threads. Forgive my vagueness please: I'm aware that having looked as hard as I have, I'm probably asking the wrong question, and probably therefore haven't formed the question quite clearly in my own mind, but I'm asking the community for help!
Thanks
The best I can think of is to put breakpoints on dispatch_async, -[NSOperation init], -[NSOperationQueue addOperation:] and so on. You could configure those breakpoints to log their stacktrace, possibly some other info (like the block's address for dispatch_async, or the address of the queue and operation for addOperation:), and then continue running. You could then look though the logs when you're curious where a particular block came from and see what was invoked and from where. (It would still take some detective work.)
You could also accomplish something similar with dtrace if the breakpoints method is too slow.

How do I get the current state of a thread (e.g. blocking, suspended, running, etc..) in win32?

I couldn't find a documented API that yields this information.
A friend suggested I use NtQuerySystemInformation. After looking it up, I found that the information is there (see SYSTEM_THREAD), but it is undocumented and not very elegant: I get the information for all threads in the system.
Do you know of a more elegant, preferably documented API to do this?
There is no other way than using NtQuerySystemInformation.
It is true that it could be less complicated, but Microsoft simply does not provide a documented alternative.
I posted a working class here that is very elegant to use:
How to get thread state (e.g. suspended), memory + CPU usage, start time, priority, etc
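For illustration, here is a minimal sketch of that approach: resolve NtQuerySystemInformation from ntdll, ask for SystemProcessInformation, and walk the thread records that follow each process entry. It assumes the SYSTEM_PROCESS_INFORMATION declaration found in recent winternl.h headers (which exposes NextEntryOffset and NumberOfThreads); the per-thread record layout below is the commonly published, undocumented one, so treat the whole thing as a sketch rather than a supported API.

#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#include <stdlib.h>

/* Undocumented per-thread record that follows each process entry. */
typedef struct _MY_CLIENT_ID {
    HANDLE UniqueProcess;
    HANDLE UniqueThread;
} MY_CLIENT_ID;

typedef struct _MY_SYSTEM_THREAD_INFORMATION {
    LARGE_INTEGER KernelTime;
    LARGE_INTEGER UserTime;
    LARGE_INTEGER CreateTime;
    ULONG         WaitTime;
    PVOID         StartAddress;
    MY_CLIENT_ID  ClientId;
    LONG          Priority;
    LONG          BasePriority;
    ULONG         ContextSwitches;
    ULONG         ThreadState;    /* running, waiting, etc. */
    ULONG         WaitReason;
} MY_SYSTEM_THREAD_INFORMATION;

typedef LONG (NTAPI *NtQuerySystemInformation_t)(
    SYSTEM_INFORMATION_CLASS, PVOID, ULONG, PULONG);

int main(void)
{
    NtQuerySystemInformation_t query =
        (NtQuerySystemInformation_t)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemInformation");
    ULONG len = 1u << 20;
    BYTE *buf = NULL;
    LONG status;

    if (!query)
        return 1;

    /* Grow the buffer until the whole process/thread snapshot fits. */
    do {
        free(buf);
        len *= 2;
        buf = (BYTE *)malloc(len);
        if (!buf)
            return 1;
        status = query(SystemProcessInformation, buf, len, NULL);
    } while (status == (LONG)0xC0000004L);   /* STATUS_INFO_LENGTH_MISMATCH */

    if (status != 0) {
        free(buf);
        return 1;
    }

    /* Each SYSTEM_PROCESS_INFORMATION entry is followed in memory by
       an array of NumberOfThreads thread records. */
    for (BYTE *p = buf;;) {
        SYSTEM_PROCESS_INFORMATION *spi = (SYSTEM_PROCESS_INFORMATION *)p;
        MY_SYSTEM_THREAD_INFORMATION *t =
            (MY_SYSTEM_THREAD_INFORMATION *)(spi + 1);

        for (ULONG i = 0; i < spi->NumberOfThreads; i++)
            printf("pid %p tid %p state %lu wait reason %lu\n",
                   t[i].ClientId.UniqueProcess, t[i].ClientId.UniqueThread,
                   t[i].ThreadState, t[i].WaitReason);

        if (spi->NextEntryOffset == 0)
            break;
        p += spi->NextEntryOffset;
    }

    free(buf);
    return 0;
}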
