Inserting text to text box in tcl tk through thread - multithreading

I'm currently developing a GUI in Tcl/Tk. In the GUI I am trying to read a file continuously (like tail -f) and put the data into a text box. I have created a thread to separate this task and keep the GUI alive, but it doesn't let me use the text widget name inside the thread.
So I'm stuck on the question of how to pass the text widget name to the thread.
ttk::notebook .note -width 1000 -height 450
ttk::frame .note.tabOne;
ttk::frame .note.tabTwo;
.note add .note.tabOne -text "Test Configuration"
.note add .note.tabTwo -text "Results"
set .note.tabTwo.results [ScrolledWindow .note.tabTwo.results ]
pack .note.tabTwo.results -fill both -expand true
set resulttxt [text .note.tabTwo.results.resulttxt -wrap none -width 63 -height 10]
.note.tabTwo.results setwidget $resulttxt
pack .note
proc displayText {file} {
    global resulttxt
    set data "hithi fio ui"
    $resulttxt insert end $data
    exec fio $file --status-interval=1 > test1.txt 2>&1 &
    set t [thread::create]
    thread::send -async $t [list set yourxs $resulttxt]
    thread::send $t {
        global resulttxt
        puts $yourxs
        # Load the Tcllib fileutil package to use its
        # findByPattern procedure.
        #package require fileutil
        #set files [fileutil::findByPattern [pwd] *.tcl]
        set log [open /media/sf_Tcl/bin/project/fio_ui-1.0/test1.txt r]
        for { } { true } { after 1000 } {
            #after [expr { int(500*rand()) }]
            $yourxs insert end $data1
        }
        close $log
    }
}

In Tcl, all command names (including all widget names) are local to a particular interpreter instance, and instances are bound to a thread. The non-GUI thread will have to send code to the GUI thread to do the insert. The easiest way to handle this is to make the GUI thread create a command in the non-GUI thread to do the insertion.
thread::send $t [list proc appendToTextWidget string \
    "thread::send [thread::id] \[list .note.tabTwo.results.resulttxt insert end \$string]"]
The quoting required is a little tricky, since you can't just use list to do all the work, but it is fairly straightforward since thread::id is guaranteed to return a metacharacter-free word.
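Written out by hand, the command that ends up defined in the non-GUI thread looks like this (tid3 is only a placeholder for whatever thread::id actually returned in the GUI thread):
    # Hand-expanded form of the definition sent above; tid3 stands in for the real GUI thread ID
    proc appendToTextWidget string {
        thread::send tid3 [list .note.tabTwo.results.resulttxt insert end $string]
    }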
Once you've done that, the non-GUI thread can just do:
appendToTextWidget "here's a message\n"
and that will add the text at the end, concealing all the details of how that happens from the processing code.
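For the tail -f part of the question, here is a minimal sketch of what the worker thread could then run, using the log path and one-second interval from the question; pollLog is just an illustrative helper name:
    thread::send -async $t {
        # Poll the fio log once a second and forward anything new to the GUI thread
        set log [open /media/sf_Tcl/bin/project/fio_ui-1.0/test1.txt r]
        proc pollLog {chan} {
            seek $chan 0 current    ;# clear any cached EOF state so newly appended data is seen
            set data [read $chan]
            if {$data ne ""} {
                appendToTextWidget $data
            }
            after 1000 [list pollLog $chan]
        }
        pollLog $log
        # When this script finishes, the thread returns to its event loop,
        # so the scheduled after callbacks keep firing.
    }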

Related

Fork and return without waiting for child in perl

I have a perl CGI program that uses fork() to launch a child process. The child does some long-running (~60 second) calculations and eventually saves the results in a tmp file. The parent is supposed to return control to the browser, which displays a progress message and checks periodically whether the child has written its tmp file; once the file appears, the results are displayed. So the code looks like this:
# Get a unique name for this job
my $guid = Data::GUID->new;
my $id = $guid->as_hex;
# Do the fork
my $pid = fork;
if (!defined $pid) {
    die "cannot fork: $!";
# Child does this
} elsif ($pid == 0) {
    # Do some calculations
    # Save results in a filename with $id
# Parent does this
} else {
    # Return the location of the tmp files to the client
    return "Content-Type: text/html\n\n", $id;
    # Browser uses contents of $id to check for the result file
}
I originally set this up on RedHat Linux, and it worked just fine. However, now I'm trying to port it to a server running Ubuntu, and it seems that the parent is waiting for the long-running child to finish before it returns. I believe this to be the case because the browser hangs for the duration of the calculation, never shows the progress page, and immediately jumps to the results once the calculation is done.
I suspect this has something to do with fork emulation, but I'm not certain, and I haven't been able to find a way around it. Thanks for any suggestions.
EDIT: This code is part of a subroutine - I use CGI::Application, so the calling code just "uses" this code. Hence the return statement. I doubt this accounts for the problem, though, because I haven't touched the code since porting from Red Hat (where it worked) to Ubuntu (where it doesn't).
Randal Schwartz's Watching long processes through CGI provides a useful example. It is straightforward to fork:
if ( my $pid = fork ) {        # parent does
    delete_all();              # clear parameters
    param( 'session', $session );
    print redirect( self_url() );
}
elsif ( defined $pid ) {       # child does
    close STDOUT;              # so parent can go on
    unless ( open F, "-|" ) {
        open STDERR, ">&=1";
        exec "/usr/sbin/traceroute", $host;
        die "Cannot execute traceroute: $!";
    }
The rest of the article explains how to help the user keep track of a running job.

Using Threads in tcl, need to execute the script and watch log file for script hung

My scenario is: using Tcl, I write a file, then I source that file and want to keep watch on the log file that it generates during its execution. If the size of the log file does not change for 2 hours then I need to stop the execution of the file and rerun the Tcl script, which regenerates the file and sources it again (the generate-and-source cycle continues until the file's execution completes).
Here is pseudo code of my scenario:
set root /home/nikhil/
set group {all}
set TopScript [open $root/TopScript.tcl w]
puts $TopScript "[exec perl $root/extract_excel.pl $group] \n}"
puts $TopScript "set logfilename $root/logfile"
puts $TopScript "source $root/main_1.tcl"
puts $TopScript "source $root/main_2.tcl"
close $TopScript
# Pseudo code for the scenario I want:
thread::create {
    exec tclsh /home/nikhil/TopScript.tcl
    thread::wait
}
thread::create {
    set flag_for_interupt 0
    while {!$flag_for_interupt} {
        set old_log_size [file size $root/logfile]
        after [expr {int(1000* 60* 60* 2)}]
        set new_log_size [file size $root/logfile]
        if {$old_log_size == $new_log_size} {
            puts "I suspect Test is in hung state.... checking again after 2 hours.....\n"
            after [expr {int(1000* 60* 60* 2)}]
            set new_log_size [file size $root/logfile]
            if {$old_log_size == $new_log_size} {
                puts "\n\n Test is in hung state.... log has not updated since last 4 hours........\n\n"
            }
            ########## enter code to interupt main thread and execute again
            set flag_for_interupt 1
        }
    }
}
Tcl does not share (normal) variables between threads. Instead, you need to work by sending messages between threads. A message is just a (usually short) script that you ask the other thread to run (the result of the script can be handled in a few ways, including a synchronous wait or by running the script disconnected). Most of the time, you set up a procedure in the receiving thread to actually do the work.
Let's restructure your waiting thread to operate that way:
set waiter [thread::create {
    proc do {filename targetThread returnMessage} {
        set delay [expr {int(1000* 60* 60* 2)}]
        while true {
            # This would be really a do-while loop, but we don't have those
            while true {
                set old_log_size [file size $filename]
                after $delay
                set new_log_size [file size $filename]
                if {$old_log_size == $new_log_size} break
            }
            puts "I suspect Test is in hung state... checking again after 2 hours...\n"
            after $delay
            set new_log_size [file size $filename]
            if {$old_log_size == $new_log_size} break
        }
        puts "\n\n Test is in hung state... log has not updated since last 4 hours...\n\n"
        # Send message to main thread to do something about the hung test
        thread::send -async $targetThread $returnMessage
    }
    thread::wait
}]
We'd set that thread actually working like this:
thread::send -async $waiter [list do $root/logfile [thread::id] {set test_hung 1}]
However, the only long operations in there are the calls to after. (Well, unless you're fantastically unlucky with the OS calls to get the log file size.) That means we can convert to using an asynchronous form in the thread, leaving the thread open to being accessed while it is working.
set waiter [thread::create {
    proc do {filename targetThread returnMessage} {
        set delay [expr {int(1000* 60* 60* 2)}]
        set old_log_size [file size $filename]
        # Schedule the run of do2 in two hours
        after $delay [list do2 $filename $targetThread $returnMessage $delay $old_log_size]
    }
    proc do2 {filename targetThread returnMessage delay old_log_size} {
        set new_log_size [file size $filename]
        if {$old_log_size == $new_log_size} {
            puts "I suspect Test is in hung state... checking again after 2 hours...\n"
            # Schedule the run of do3 in another two hours
            after $delay [list do3 $filename $targetThread $returnMessage $delay $old_log_size]
        } else {
            # An update did happen; run ourselves again in two hours to compare to the new size
            after $delay [list do2 $filename $targetThread $returnMessage $delay $new_log_size]
        }
    }
    proc do3 {filename targetThread returnMessage delay old_log_size} {
        set new_log_size [file size $filename]
        if {$old_log_size == $new_log_size} {
            puts "\n\n Test is in hung state... log has not updated since last 4 hours...\n\n"
            # Send message to main thread to do something about the hung test
            thread::send -async $targetThread $returnMessage
        } else {
            # An update did happen; run ourselves again in two hours to compare to the new size
            after $delay [list do2 $filename $targetThread $returnMessage $delay $new_log_size]
        }
    }
    thread::wait
}]
So… we've got manageability but lost readability (the API for use is identical). Not bad but not great! (This sort of restructuring is known as conversion to Continuation-Passing Form, and it tends to destroy code readability.) In 8.6 we can do better because we have coroutines that can yield to the thread's event loop.
set waiter [thread::create {
    proc do {filename targetThread returnMessage} {
        coroutine Coro[incr ::Coro] doBody $filename $targetThread $returnMessage
    }
    proc delayForTwoHours {} {
        set delay [expr {int(1000* 60* 60* 2)}]
        after $delay [info coroutine]
        yield
    }
    proc doBody {filename targetThread returnMessage} {
        while true {
            while true {
                set old_log_size [file size $filename]
                delayForTwoHours
                set new_log_size [file size $filename]
                if {$old_log_size == $new_log_size} break
            }
            puts "I suspect Test is in hung state... checking again after 2 hours...\n"
            delayForTwoHours
            set new_log_size [file size $filename]
            if {$old_log_size == $new_log_size} break
        }
        puts "\n\n Test is in hung state... log has not updated since last 4 hours...\n\n"
        # Send message to main thread to do something about the hung test
        thread::send -async $targetThread $returnMessage
    }
    thread::wait
}]
That (which still has the same API calling convention) gives manageability yet keeps virtually all the code (especially apart from the short bits in their own procedures) looking the same as the first version I wrote. Under the covers, the coroutine does the rewriting to continuation-passing form, but that's now handled by the Tcl runtime instead of needing to be done explicitly in your code. (Also, Tcl uses explicit coroutine launching, but that in turn means that it can yield across multiple stack levels without the complex yield chains of some other languages.)
I leave it as an exercise to use the second or third version as the basis for a version of the code which doesn't need extra threads at all. Running processes in the background also doesn't need threads; this whole management process can work with just a single (user-visible) thread.
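Whichever version you pick, the main thread receives the notification the same way. A minimal sketch of that side, assuming the {set test_hung 1} message used earlier (what to actually do about the hung test is the original script's own logic):
    # Main thread: start the watcher, then service events until it reports back
    thread::send -async $waiter \
        [list do $root/logfile [thread::id] {set test_hung 1}]
    set test_hung 0
    vwait test_hung
    puts "Watcher reported a hung test; regenerate and re-source TopScript.tcl here"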

TCL thread::send upvar variable not being set

I have code that looks like this:
package require Thread
proc p1 {} {
    set tid [thread::create {
        proc executeCommand {command} {
            return $command
        }
        thread::wait
    }]
    set result ""
    ::thread::send -async $tid [list executeCommand {"Hello thread world"}] result
    #***Do some additional stuff***
    vwait result
    ::thread::release $tid
    puts $result
    return $result
}
p1
After sourcing the .tcl file that holds this code, my expectation is for the child thread to return "Hello thread world" after vwait is called and the 'result' variable to be printed out, but neither of these happens. It appears that the 'result' variable remains blank.
What's strange is that when I take the code out of a procedure (proc) block and source the .tcl file, it works perfectly, but with the way my system is set up I need to use procedures.
Not sure what I am doing wrong.
The “problem” is that the receiving variable (just as with vwait) is located with respect to the global namespace, not the variables that are in the current scope; the flag TCL_GLOBAL_ONLY is used on the call to Tcl_SetVar2Ex in the callback (and Tcl's underlying variable implementation is quite complicated so one really wants to stick to the API if possible):
/*
 * Set the result variable
 */
if (Tcl_SetVar2Ex(interp, var, NULL, valObj,
        TCL_GLOBAL_ONLY | TCL_LEAVE_ERR_MSG) == NULL) {
    rc = TCL_ERROR;
    goto cleanup;
}
That makes sense in general, as you could have returned from the procedure between launching the background thread and receiving the result, and Tcl really tries to avoid doing early binding.
So where did the result go? It's in the global result variable (the :: just means “I really mean to use the global variable named this”):
% puts $::result
"Hello thread world"
The easiest fix for this is to use a variable to do the receiving that is unique to the particular call. Which sounds more complicated than it really is, since we've already got a unique token in the thread ID:
proc p1 {} {
    set tid [thread::create {
        proc executeCommand {command} {
            return $command
        }
        thread::wait
    }]
    ### Make the name of the global variable (an array element) ###
    set var ::p1results($tid)
    ### This is a simple transformation of what you were already doing ###
    set $var ""
    ::thread::send -async $tid [list executeCommand {"Hello thread world"}] $var
    #***Do some additional stuff***
    vwait $var
    ### Transfer the global variable into a local and remove the global ###
    set result [set $var]
    unset $var
    ### Back to your code now ###
    ::thread::release $tid
    puts $result
    return $result
}
This appears to work as expected when I try it out.
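With those changes, running the script prints what the question expected; the braces in the sent command mean the literal quotes are part of the value (p1.tcl is just a hypothetical name for the file holding the code):
    $ tclsh p1.tcl
    "Hello thread world"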

how to start multiple threads using a loop correctly in TCL?

I want to produce 200 files using multiple threads, so I modified a Tcl example as below.
But the result is strange: the total number of output files is random, about 135. I am confused about how the threads, once started, see the changing value of the variable $thread.
package require Thread
puts "*** I'm thread [thread::id]"
for {set thread 1} {$thread <= 200} {incr thread} {
    set thread_ida $thread
    tsv::set app global_thread_num $thread_ida
    set id [thread::create -joinable {
        puts [ tsv::get app global_thread_num ]
        set thread_id [ tsv::get app global_thread_num ]
        puts "${thread_id}thread_id"
        set outFile "./test/${thread_id}"
        append outFile ".tmd"
        puts $outFile
        set FileOut [open $outFile w+]
        puts $FileOut "${thread_id}thread_id"
    }] ;# thread::create
    puts "*** Started thread $id"
    lappend threadIds $id
} ;# for
puts "*** Existing threads: [thread::names]"
# Wait until all other threads are finished
foreach id $threadIds {
    thread::join $id
}
puts "*** That's all, folks!"
The problem you've got is that these two lines:
puts [ tsv::get app global_thread_num ]
set thread_id [ tsv::get app global_thread_num ]
are not guaranteed to get the same value at all, nor are they at all likely to synchronise with the setting of the shared variable in the outer loop. Threads in Tcl have a reasonable amount of overhead during launch.
Instead, what you should do is make threads with the description of work inside a procedure and then send a simple message to them with the ID to start the real processing; that's much easier to make work.
package require Thread
puts "*** I'm thread [thread::id]"
for {set thread 1} {$thread <= 200} {incr thread} {
    set id [thread::create -joinable {
        proc DoWork {thread_id} {
            # Only one puts here
            puts "${thread_id}thread_id"
            set outFile "./test/${thread_id}"
            append outFile ".tmd"
            puts $outFile
            set FileOut [open $outFile w+]
            puts $FileOut "${thread_id}thread_id"
            # Close the channel, please...
            close $FileOut
            # Thread done, and since we're using joinable threads it should die now
            thread::release
        }
        thread::wait
    }] ;# thread::create
    puts "*** Started thread $id"
    lappend threadIds $id
    # Start the work going, passing over the numeric ID in the "message"
    thread::send -async $id [list DoWork $thread]
} ;# for
puts "*** Existing threads: [thread::names]"
# Wait until all other threads are finished
foreach id $threadIds {
    thread::join $id
}
puts "*** That's all, folks!"
The key things here are that we create a procedure in each thread (DoWork) to receive the message, get the thread to wait for messages with thread::wait, and then launch the work by sending a message in with thread::send -async. The work destroys the thread with thread::release; it needs to do so explicitly otherwise it'll end up back in thread::wait waiting for the next message.
I'd probably use a thread pool in production code, as they're easier to scale to the hardware available in a particular deployment. The DoWork procedure — without the thread::release — would be defined in the pool's -initcmd option. The thread::send -async would be replaced by posting work to the pool, and you'd be waiting for the jobs instead of the threads.
package require Thread
puts "*** I'm thread [thread::id]"
set pool [tpool::create -maxworkers 48 -initcmd {
    proc DoWork {thread_id} {
        # Only one puts here
        puts "${thread_id}thread_id"
        set outFile "./test/${thread_id}"
        append outFile ".tmd"
        puts $outFile
        set FileOut [open $outFile w+]
        puts $FileOut "${thread_id}thread_id"
        # Close the channel, please...
        close $FileOut
    }
}]
for {set thread 1} {$thread <= 200} {incr thread} {
    lappend work [tpool::post -nowait $pool [list DoWork $thread]]
}
# Wait until all work is finished
foreach id $work {
    tpool::wait $pool $id
}
puts "*** That's all, folks!"
tpool::release $pool

Providing a status update when "ENTER" is pressed, while program is working

I have a Perl script that loops through a list of files and calls a binary with a different argument each time. I am using IPC::Run. I would like, when the user presses a key such as "ENTER", a status message to be displayed such as
"Currently working on 14 of 28 total scripts (50% complete)"
My script is as follows:
foreach my $file (@files) {
    $file =~ s/$file_dir//;
    # Run the test case, store the output in $stdout
    run [ "php", "PROGRAM.phar", "$file" ], ">", \my $stdout;
    print LOG_FILE "Return code $?\n";
    print LOG_FILE "Output: $stdout";
}
Basically how would I interrupt the binary in order to display my status message?
If I understand correctly, this usage of IPC::Run is not multithreaded: it will execute the commands one by one, and it is not possible to print a status message because there is only one process. You could use something like Parallel::ForkManager instead, like this:
use Parallel::ForkManager;
my $pm = Parallel::ForkManager->new($MAX_PROCESSES);
my $input;
foreach my $data (@all_data) {
    # Forks and returns the pid for the child:
    my $pid = $pm->start and next;
    # ... do some work with $data in the child process ...
    $pm->finish;    # Terminates the child process
}
# In the parent: pressing ENTER produces a status line
chomp( $input = <STDIN> );
print "Some statistics\n";
Regards,

Resources