How does Erlang handle receive - multithreading

I am looking for information on how Erlang internally handles the receive call.
report(Count) ->
    receive
        X -> io:format("Received #~p: ~p~n", [Count, X])
    end.
Is receive executed on the same thread as other functions?
Is each process responsible for calling its own receive?
Does Erlang use a "god" process that calls all the receives?

After a receive statement, the process first checks whether the mailbox holds any message matching one of the receive clauses. If not, it enters a wait state (via an interaction with the scheduler, the details of which I don't have). The scheduler will then reschedule the process only when a new message is put in the mailbox, or when the timeout (after clause) expires.

Related

Rust concurrency question with SyncSender

I am new to Rust and trying to understand the Dining Philosopher code here :
https://google.github.io/comprehensive-rust/exercises/day-4/solutions-morning.html
By the time execution reaches the following lines in the main thread, isn't it possible that none of the spawned threads have started executing their logic, so that there is nothing in 'rx' and the program simply quits?
for thought in rx {
    println!("{}", thought);
}
When iterating over a channel, it internally calls Receiver::recv, whose documentation specifies:
This function will always block the current thread if there is no data available and it’s possible for more data to be sent (at least one sender still exists). Once a message is sent to the corresponding Sender (or SyncSender), this receiver will wake up and return that message.
So the receiver will block until it has data available, or until all the senders have been dropped.
Yes, execution can reach for thought in rx { ... } before the threads have even started. However, this will still work because iterating over a Receiver will wait until there is a message and will only stop if all Senders have been destroyed (ergo it is no longer possible to receive any messages).

VB6 callback Sub's relation to calling win32 thread, and vice versa

A function in a win32 process (call it void cCB()) calls a VB6 Sub (call it vb6S), which receives some numeric data from cCB. cCB originally received the vb6S reference via AddressOf.
I've got two basic newbie questions about this:
Question 1: Is cCB's thread blocked by the call while vb6S() executes its code?
Question 2: Will VB have any "issues" if cCB's thread terminates and cCB's
memory is de-allocated before vb6S has completed its work?
EDIT:
In response to the request for code (and thanks for that downvote), here is the issue:
The task is a microsecond timer, to be used to unblock the WM queues of two VB IDEs when debugging two VB apps that are communicating via WM_COPYDATA messages.
That is: there are apps Alice and Bob. When Alice sends Bob a wm74 (WM_COPYDATA) message, Bob hooks on to it, saves the information in the copydatastruct, and is then finished with the message. The problem is that, now, in debugging, both Alice's and Bob's message queues are blocked as long as either debugger is at a breakpoint. It doesn't take long to kill one or both IDEs at that point.
So what I want to implement is an old Java Applet trick: Bob calls out of process, which waits a tiny slice of time, and calls back in. While out of process, Bob can release the message, the app/IDE tells Windows the message is handled, and Alice can go about her business. To do this, before returning, Bob's message handler calls into a win32 DLL function, say:
typedef void (__stdcall *FUNCPTR)(int);
void __stdcall ExecuteCallback(FUNCPTR cbAddress, double microS);
which Bob calls as
ExecuteCallback AddressOf My74Processor, 250
where My74Processor contains the app logic to process the string according to the code number that Alice put in the message structure.
ExecuteCallback grabs an existing timingWork code object from a queue of those, and puts it into a queue of threads via a workQueue manager that knows to start the thread at the proper point in the timingWork class.
After the timingWork timer clocks 250 microseconds (in this case) via watching QueryPerformanceCounter, it calls back to Bob's Sub at
Sub My74Processor(ByVal reason As Long)
and terminates. The timingWork object is not destroyed; the last thing it does is put itself back in the queue of available objects. The thread, on the other hand, terminates, and is available in the thread queue for more, maybe different, work when its turn comes around again.
Back with Bob, there is a breakpoint set in the processing code down in Sub My74Processor. My questions, then, are in this context:
(a) when that breakpoint is hit, and the programmer takes some time to check variables and logic and then continues, will all be fine in the stack of Bob's IDE after My74Processor ends?
(b) when the timingWork routine makes the call back, will that thread be blocked?
I'm virtually certain the answer to (b) is "no." It's (a) I'm worried about, and I'm not experienced enough with VB6 to know the answer to that.
EDIT2 @wqw, I came to the same conclusion as I was going to sleep last night, but not for the correct reasons you state in your comment.
How about this: use the same out-of-process call, only instead of a callback function, send cCB the information it needs to send a WM_COMMAND message to VB6S, spoofing a state change of one of its controls, say a button, butCB? Then ExecuteCallback(..) becomes ExecuteClickback(..):
void __stdcall ExecuteClickback(WORD butID, HANDLE hBut, HANDLE hVB6S, double microS);
The timingWork object does a PostMessage of a WM_COMMAND to hVB6S's Form with:
wParam = MAKEWPARAM(butID, BN_CLICKED); lParam = (LPARAM) hBut; // control ID in the low word, notification code in the high word
Then, VB6S's form should raise its butCB_Click() event. That event handler would have the instruction to call My74Processor(). Since VB6S is necessarily in a pure message wait state once the call to ExecuteClickback is made, the arrival of the WM_COMMAND should "look" to VB6S just like the user actually clicked the button.

Is it safe and good design for AllocateHWnd to respond to more than one thread?

It's known that, when one needs to communicate between the UI thread and a worker thread, a hidden window can be created for thread safety (to avoid problems with handle recreation).
For example:
Form1 has N dynamically created TProgressBar instances, one per running background task.
It is always guaranteed that WM_REFRESH will only be posted from inside a task thread.
Form1 has an H: THandle property whose allocated window procedure is:
procedure RefreshStat(var Message: TMessage); message WM_REFRESH;
Inside RefreshStat, when there is only one background thread, I could easily use the LParam and WParam message parameters to map a task ID and a position.
I don't know if the title says what I want to know, but let's imagine we have an application with multiple background tasks running.
In my case I use a TProgressBar to report the progress done.
Does AllocateHwnd guarantee that all messages arrive at the hidden window with no race condition?
What happens if two or more tasks post the message at the same time?
If this needs to be controlled manually, I wonder if there is anything to do besides building another message-loop system on top of the custom message.
I hope the question is clear enough.
The message queue associated with a thread is a thread-safe queue. Both synchronous and asynchronous messages from multiple other threads are delivered safely, with no harmful data races. There is no need for any external synchronization when calling the Windows message API functions like SendMessage and PostMessage.
If two threads post or send messages to the same window at the same time, then there is no guarantee as to which message will be processed first. This is what is known as a benign race condition. If you want one message to be processed before the other then you must impose an ordering.

Are the signal-slot execution in Qt parallelized?

I have a basic Qt question on the way it handles signals and slots. I am very new to the framework, so pardon me if it sounds stupid. Say I have certain signals connected to certain slots:
signal1() ---> slot1(){ cout <<"a"; }
signal2() ---> slot2(){ cout <<"b"; }
signal3() ---> slot3(){ cout <<"c"; }
And in my code I call
emit signal1();
emit signal2();
emit signal3();
Does Qt guarantee to print out "abc" to the screen, in other words process the slots sequentially? Or will it spawn a separate thread to execute each slot?
Thanks!
By default:
1) If the signal is emitted in the thread to which the receiving object has affinity, then the slots connected to this signal are executed immediately, just like normal function calls. Execution of the code following the emit statement resumes once all slots have returned.
2) Otherwise, the slot is invoked when control returns to the event loop of the receiver's thread. The code following the emit keyword continues immediately, and the slots are executed later in the receiver's thread.
More info about connection types here: http://qt-project.org/doc/qt-4.8/threads-qobject.html#signals-and-slots-across-threads
Just to add to Kotlomoy's correct answer :)
You can also control the type of connection from the default by supplying the optional parameter ConnectionType:
connect(obj, signal, obj, slot, connectionType)
Where your main options are:
Qt::QueuedConnection: the slot will only run when control returns to the event loop of the receiving thread, i.e. the call is added to that thread's queue. Specify this if you don't want your slot to be processed immediately, which can be very useful.
Qt::DirectConnection: alternatively, you can specify a direct connection (even between threads if you want), but generally you do not need or want this option, since it is already the default when a signal is emitted to a slot within the same thread.
If you use QueuedConnection you guarantee that "abc" is printed to the screen in that order.
It's worth noting that if a DirectConnection delivery occurs while you are processing a previous slot (let's say some other external event triggers a signal, like socket input), then you will get "interrupted". This won't happen in your simple example.

Is there a pattern to cancel a block on another dispatch queue?

This could be a much more generic question about how best to cancel blocking jobs on other threads, but I'm interested in a solution in the context of Grand Central Dispatch. I need to call a function that basically blocks until it gets data from the network; it could potentially block forever. I have it set up now so that this blocking call happens on a private dispatch queue, and when I do get data, I put a block back on the main queue. The problem is that once I dispatch my private-queue block with its blocking call, I can never really cancel it. Imagine this ability was tied to a user settings toggle: if they toggled it off, I would want this blocking job and its completion block to simply end. Is there a good solution to this type of problem?
Thanks
- (void)_beginListeningForNetworkJunk
{
    dispatch_async(my_private_queue, ^{
        // Blocks until it gets data.
        id data = [NetworkListener waitForData];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self _handleNetworkData:data];
        });
    });
}
- (void)_endListeningForNetworkJunk
{
    // How do I kill that job that is blocked on my private queue?
}
You can't. The problem lies in NetworkListener and its blocking, uninterruptible interface.
Normally, you'd code the block to service the network connection asynchronously and also monitor some other signalling mechanism, such as a custom run loop source (or NSPort or pipe file descriptor or …). When the network connection had activity, that would be serviced. When the signalling mechanism fired, you would shut down the network connection and exit the block.
In that way, the block could be cancellable with its cooperation.
Since your block is stuck in -waitForData, it can't cooperate. There's no mechanism for canceling blocks without their cooperation. The same is true of NSOperation and NSThread. The reason is that it's basically infeasible to terminate another thread's activity without its cooperation.
You need a different design for your networking code.
In principle, you can't cancel anything running on another thread. You can only politely ask the task running on the other thread to cancel. I usually create objects representing tasks so that "cancel" can be called on those objects.
In your situation: The waitForData cannot be cancelled (unless NetworkListener has some API to do it; in that case waitForData would need some mechanism to distinguish between data arriving and cancellation).
In _endListeningForNetworkJunk, you can set a BOOL "cancelled" flag to indicate the call has been cancelled. Then, in the code that runs on the main queue, check whether that "cancelled" flag is still clear. That way, if you call _endListeningForNetworkJunk from the main thread, you're sure that _handleNetworkData will not be called. If you call _endListeningForNetworkJunk from another thread, the main thread could have just started the call to _handleNetworkData.
If you checked "cancelled" just before dispatching to the main queue, that block could already be dispatched but not yet executing when you call _endListeningForNetworkJunk on the main thread.
