I have a system with multiple I/O interfaces, and I'm collecting the output of all of them into a common log. Two of the interfaces are through well-behaved channels that are opened as file-like objects, and can be managed on an event basis with "fileevent readable". The other two are the problem.
These are to a vendor-supplied library, which someone else has already kindly wrapped into a Tcl package (snoopy, FWIW). However, the only read access is a blocking call, and there's nothing in the package that would cause an event equivalent to a fileevent.
I've figured out how to spawn a separate thread to block on the read, pull the result, and put it into a message queue for the main thread. But having the main thread block on reading the queue would seem to defeat the purpose, especially since there are two queues it would have to block on. And I haven't been able to get the reader to generate an event that can trigger the main thread to read the queue.
I've looked on the tcl.tk wiki to no avail so far. I've tried using the uevent library to generate an event on a message push, but the event goes to the writing thread instead of the reading thread, which really doesn't help. It seems like there should be some solution related to a Thread condition variable, but so far I haven't been able to find an appropriate design pattern for that use of the library.
If all else fails I'll fall back to a Tk event, but I'm trying to keep Tk out of this as it's meant to be an automated system with no GUI, and any mention of Tk pushes tclsh into wish and pops up a GUI window.
I feel like I'm close, but just missing something.
First, the main thread needs to run the event loop in order to receive events. The idiomatic way to do this is to call vwait forever once you've finished setting up your program (assuming you never write to that variable), but if you are running Tk you already have an event loop (GUIs need event loops).
There are two ways to do the messaging between threads.
Thread events
The thread::send command uses events to dispatch code to execute (the message) between threads. All you need to do is tell the worker thread the main thread's ID so it knows where to send. Note that you may well want to send the event asynchronously, like this:
thread::send -async $mainID [list eventReceiver "something happened" $payload]
Pipelines
If you're using Tcl 8.6, you can use chan pipe to create an unnamed OS pipe. You can then use normal fileevents, etc., to deliver information from one thread to another that way.
# In master
lassign [chan pipe] readSide writeSide
thread::transfer $worker $writeSide
thread::send $worker [list variable pipe $writeSide]
fconfigure $readSide -blocking 0
fileevent $readSide readable [list handleLine $readSide]

# In worker
fconfigure $pipe -blocking 0 -buffering line
puts $pipe "got event: $payload"
It's probably easier to use thread events in retrospect! (The main advantage of a pipe is that you can also put the worker in another process if necessary.)
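For reference, handleLine isn't defined in the snippet above; a minimal sketch of what it might look like on the master side (the proc name comes from the fileevent registration, the body is an assumption):

```tcl
proc handleLine {chan} {
    if {[gets $chan line] >= 0} {
        # Got a complete line from the worker; act on it
        puts "worker says: $line"
    } elseif {[eof $chan]} {
        # Worker closed its end of the pipe
        close $chan
    }
}
```

With the channel in non-blocking mode, gets returns -1 when no full line is available yet, so the eof check distinguishes "not yet" from "worker gone".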
I finally grokked what Donal was saying about Thread events. I blame inadequate morning caffeine for not getting it the first time.
All the prior examples of thread::send I've seen concerned a master sending scripts down to the worker thread. In this case, the worker thread needs to send scripts back up to the master, where the script is the one that would be called on [fileevent readable] if this were a channel.
Here's my test code for the interaction:
package require Thread

proc reader {payload} {
    puts $payload
}

set t1 [thread::create]
thread::send -async $t1 {
    proc produce {parentid} {
        while 1 {
            after 250   ;# substitutes for
            incr data   ;# the blocking read
            thread::send $parentid [list reader $data]
        }
    }
}
set tid [thread::id]
thread::send -async $t1 [list produce $tid]
vwait forever
The part I saw but didn't immediately grok was the importance of the master thread having an ID that can be sent to the worker. The key that I'd missed was that the worker can send scripts to the master just as easily as the master can send scripts to the worker; the worker just usually doesn't know the master's Thread ID.
Once the ID is passed to it, the producer thread can therefore use thread::send to call a proc in the master to handle the data, and it becomes an event in the master thread, just as I'd desired. It's not the way I've worked with threads in the past, but once understood it's powerful.
Related
I am trying to execute one procedure on an auxiliary thread while the rest of my code will be executed in parallel. I have this small example:
set thread_loadmaterials [thread::create]
thread::send -async $thread_loadmaterials [list wa "hola from thread_loadmaterials"]
thread::send -async [thread::id] [list wa "hola from thread::id"]
NOTE: The created thread exists (checked with thread::exists).
The thread_loadmaterials variable holds the created thread's ID, which is different from the current thread's ID ([thread::id]).
To my surprise, the thread identified by [thread::id] shows the message, but the thread that I created doesn't do anything.
If someone can help me understand this, I would be very grateful.
The man page on thread::send -async says:
The target thread must enter its event loop in order to receive scripts sent via this command. [...] Threads can enter the event loop explicitly by calling thread::wait or any other relevant Tcl/Tk command, like update, vwait, [...]
You never have the main thread (as identified by thread::id) in your script enter the event loop, e.g., by calling vwait.
That said, there is no real need to thread::send to yourself, which here is the main thread.
I have a thread which creates a named pipe like this:
CreateNamedPipe('\\\\.\\pipe\\dirwatcher', PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED, PIPE_TYPE_BYTE, 1, 1, 1, 0, null);
I then spawn another thread called "poller", and this one watches files. I want it to also wait on this pipe, so that I can interrupt the infinite poll.
I open the pipe in the poller thread like this:
pipe = CreateFile('\\\\.\\pipe\\dirwatcher', GENERIC_READ, FILE_SHARE_READ, null, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, null);
I then try to wait on the pipe. In this case LP_HANDLES is an array with just one element, pipe:
rez_wait = WaitForMultipleObjectsEx(LP_HANDLES.length, LP_HANDLES, false, INFINITE, true);
console.log('rez_wait:', rez_wait);
However this doesn't wait at all, and immediately returns 0. I used the overlapped flag both when creating the named pipe and when connecting to it. Does anyone know how to fix this?
Here is my code specifically, it is js-ctypes:
Creating the pipe:
https://github.com/Noitidart/jscFileWatcher/blob/master/DirectoryWatcherWorkerSubscript.js#L91-L99
Getting the pipe in poller thread:
https://github.com/Noitidart/jscFileWatcher/blob/master/DirectoryWatcherPollWorker.js#L53-L64
Waiting on the pipe:
https://github.com/Noitidart/jscFileWatcher/blob/master/DirectoryWatcherPollWorker.js#L89
Thanks
You cannot wait on pipes using WaitForMultipleObjectsEx, as stated in the remarks:
The WaitForMultipleObjectsEx function can specify handles of any of the following object types in the lpHandles array:
Change notification
Console input
Event
Memory resource notification
Mutex
Process
Semaphore
Thread
Waitable timer
You have to use PeekNamedPipe for this job (it returns immediately rather than waiting if the pipe is empty, so you'd have to poll), or you can simply do a blocking ReadFile on the pipe.
I encountered the same problem a while ago and solved it by waiting on an event with WaitForSingleObject, which I set whenever I put something into the pipe. I had to do this because I had several pipes and only one "reader thread", which woke up on the event and then used PeekNamedPipe to check which pipe contained data (I didn't want to create several "reader threads", each blocked in a ReadFile call).
Anyway, it seems quite strange to me that you can wait on almost anything using WaitForMultipleObjects except pipes (which is very annoying!).
I have a small sample program that hangs on perl 5.16.3. I am attempting to use an alarm to trigger if two threads don't finish working in time in a much more complicated program, but this boils down the gist of it. I know there's plenty of other ways to do this, but for the sake of argument, let's say I'm stuck with the code the way it is. I'm not sure if this is a bug in perl, or something that legitimately shouldn't work.
I have researched this on the Internet, and it seems like mixing alarms and threads is generally discouraged, but I've seen plenty of examples where people claim that this is a perfectly reasonable thing to do, such as this other SO question, Perl threads with alarm. The code provided in the accepted answer on that question also hangs on my system, which is why I'm wondering if maybe this is something that's now broke, at least as of 5.16.3.
It appears that in the code below, if I call join before the alarm goes off, the alarm never triggers. If I replace the join with while(1){} and go into a busy-wait loop, then the alarm goes off just fine, so it appears that join is blocking the SIGALRM for some reason.
My expectation is that the join happens, and then a few seconds later I see "Alarm!" printed on the screen, but this never happens, so long as that join gets called before the alarm goes off.
#!/usr/bin/env perl
use strict;
use warnings;
use threads;
sub worker {
    print "Worker thread started.\n";
    while (1) { }
}
my $thread = threads->create(\&worker);
print "Setting alarm.\n";
$SIG{ALRM} = sub { print "Alarm!\n" };
alarm 2;
print "Joining.\n";
$thread->join();
The problem has nothing to do with threads. Signals are only processed between Perl ops, and join is written in C, so the signal will only be handled when join returns. The following demonstrates this:
#!/usr/bin/env perl
use strict;
use warnings;
use threads;
sub worker {
    print "Worker thread started.\n";
    for (1..5) {
        sleep(1);
        print(".\n");
    }
}
my $thread = threads->create(\&worker);
print "Setting alarm.\n";
$SIG{ALRM} = sub { print "Alarm!\n" };
alarm 2;
print "Joining.\n";
$thread->join();
Output:
Setting alarm.
Joining.
Worker thread started.
.
.
.
.
.
Alarm!
join is essentially a call to pthread_join. Unlike other blocking system calls, pthread_join does not get interrupted by signals.
By the way, I renamed $tid to $thread since threads->create returns a thread object, not a thread id.
I'm going to post an answer to my own question to add some detail to ikegami's response above, and summarize our conversation, which should save future visitors from having to read through the huge comment trail it collected.
After discussing things with ikegami, I went and did some more reading on perl signals, consulted some other perl experts, and discovered the exact reason why join isn't being "interrupted" by the interpreter. As ikegami said, signals only get delivered in between perl operations. In perl, this is called Deferred Signals, or Safe Signals.
Deferred Signals were released in 5.8.0, back in 2002, which could be one of the reasons I was seeing older posts on the Net which don't appear to work. They probably worked with "unsafe signals", which act more like signal delivery that we're used to in C. In fact, as of 5.8.1, you can turn off deferred signal delivery by setting the environment variable PERL_SIGNALS=unsafe before executing your script. When I do this, the threads::join call is indeed interrupted as I was expecting, just as pthread_join is interrupted in C in this same scenario.
Unlike other I/O operations, such as read, which returns EINTR when a signal interrupts it, threads::join doesn't do this. Under the hood it's a call to the C library function pthread_join, which the man page confirms does not return EINTR. With deferred signals, when the interpreter gets the SIGALRM, it schedules delivery of the signal, deferring it until the underlying pthread_join call returns. Since pthread_join is never interrupted and never returns EINTR, my SIGALRM is effectively swallowed by threads::join. Other I/O operations would be interrupted and return EINTR, giving the perl interpreter a chance to deliver the signal and then restart the system call via SA_RESTART.
Obviously, running in unsafe signals mode is probably a Bad Thing, so as an alternative, according to perlipc, you can use the POSIX module to install a signal handler directly via sigaction. This then makes the one particular signal "unsafe".
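Following the perlipc pattern, installing an immediate handler for just SIGALRM might look like this (a minimal sketch along those lines; the handler body and timings here are illustrative, not from the original program):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use POSIX qw(SIGALRM);

# Install an "unsafe" (immediate) handler for SIGALRM only, via sigaction,
# bypassing deferred signal delivery for this one signal.
POSIX::sigaction(SIGALRM, POSIX::SigAction->new(sub { print "Alarm!\n" }))
    or die "Error setting SIGALRM handler: $!\n";

alarm 2;
```

All other signals continue to use the safe, deferred delivery mechanism; only the one installed through sigaction is delivered immediately.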
I am trying to find out whether inter-process communication is possible with Tcl threads. I am a beginner on this topic, so right now I'm just collecting information. I understand that a sender and receiver mechanism has to be coded to pass data between processes, and that the Tcl Thread package provides a send command. Also, a thread can be used as a timer to spawn a process inside it.
Is it possible to receive data from one thread in another thread?
Thank you.
# contents of test.tcl
puts stdout "hello from wish"
# end of file

# set up the command
set exe {wish85.exe}
set exepath [list $exe test.tcl]

# This next line is slightly magical
set f [open |$exepath r+]

# Use the next line or you'll regret it!
puts $f {fconfigure stdout -buffering line}

fileevent $f readable [list getline $f]
proc getline f {
    if {[gets $f line] < 0} {
        close $f
        return
    }
    puts "line=$line"
}
You need to be much clearer in your mind about what you are looking for. Threads are not processes! With Tcl, every Tcl interpreter context (the thing you make commands and variables in) is bound to a single thread, and every thread is coupled to a single process.
Tcl has a Thread package for managing threads (it should be shipped with any proper distribution of Tcl 8.6) and that provides a mechanism for sending messages between threads, thread::send. Those messages? They're executable scripts, which means that they are really flexible.
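For instance, a minimal sketch (requires the Thread package and a thread-enabled tclsh; the script contents are just illustrations):

```tcl
package require Thread

set worker [thread::create]   ;# with no script, the thread sits in thread::wait

# Synchronous send: evaluates the script in the worker and returns its result
set answer [thread::send $worker {expr {6 * 7}}]   ;# 42

# Asynchronous send: fire and forget
thread::send -async $worker {puts "hello from the worker"}
```

Because the messages are full scripts, the sender can define procs, set variables, or invoke anything the target thread's interpreter knows about.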
For communication between processes, things are much more complicated because you have to consider both discovery of the other processes and security (because processes are a security boundary by design). Here are some of the options:
Tcl is very good at running subprocesses and talking with them via pipes. For example, you can run a subordinate interpreter in just a couple of lines using open:
# This next line is slightly magical
set mypipeline [open |[list [info nameofexecutable]] r+]
# Use the next line or you'll regret it!
puts $mypipeline {fconfigure stdout -buffering line}
It even works well with the fileevent command, so you can do asynchronous processing within each interpreter. (That's really quite uncommon in language runtimes, alas.)
The send command in Tk lets you send scripts to other processes using the same display (I'm not sure if this works on Windows) much as thread::send does with threads in the same process.
The comm package in Tcllib does something very similar, but uses general sockets as a communication fabric.
On Windows, you can use the dde command in Tcl to communicate with other processes. I don't think Tcl registers a DDE server by default, but it's pretty easy to do (provided you are running the event loop, but that's a common requirement for most of the IPC mechanisms to work at their best).
More generally, you can think in terms of running webservices and so on, but that's getting quite complicated!
I want to use one thread to extract fields of packets using the tshark utility (via the system() call), with its output redirected to a file. The same file needs to be read simultaneously by another thread, so that it can make runtime decisions based on the fields observed in the file.
The problem I'm having is that even though the first thread is writing to the file, the second thread is unable to read it (it reads NULL from the file). I'm not sure why it behaves this way. I thought it might be due to simultaneous access to the same file. I considered using mutex locks, but that would block the reading thread indefinitely, since the first thread only ends when the program terminates.
Any ideas on how to go about it?
If you are using that file for interprocess communication, you could use named pipes or message queues instead. They are much easier to use and don't require explicit synchronization, because one thread writes and the other reads when data is available.
Edit: For inter-thread communication you can simply use shared variables and a condition variable to signal when data has been produced (a producer-consumer pattern). Something like:
// thread 1
while (1)
{
    // read packet
    // write packet to global variable
    // signal thread 2
    // wait for confirmation of reading
}

// thread 2
while (1)
{
    // wait for signal from thread 1
    // read from global variable
    // signal thread 1 to continue
}
The signaling parts can be implemented with condition variables: pthread_cond_t.