Spawn Expect from a Perl thread - multithreading

I am working on a script that needs to spawn an Expect process periodically (every 5 minutes) to do some work. Below is the code I have that spawns an Expect process and does some work. The main process of the script is doing other work at all times (for example, it may be waiting for user input), so I am calling this function 'spawn_expect' in a thread that keeps calling it every 5 minutes, but the issue is that Expect is not working as expected.
If, however, I replace the thread with another process, that is, if I fork and let one process take care of spawning Expect while the other process does the main work of the script (for example, waiting at a prompt), then Expect works fine.
My question is: is it possible to have a thread spawn an Expect process, or do I have to resort to using a process for this work? Thanks!
sub spawn_expect {
    my $expect = Expect->spawn($release_config{kinit_exec});
    my $position = $expect->expect(10,
        [qr/Password.*: /, sub { my $fh = shift; print $fh "password\n"; }],
        [timeout => sub { print "Timed out"; }]);
    # if this function is run via a process, $position is defined;
    # if it is run via a thread, it is not defined
    ...
}

Create the Expect object beforehand (not inside a thread) and pass it to a thread:
my $exp = Expect->spawn( ... );
$exp->raw_pty(1);
$exp->log_stdout(0);

my ($thr) = threads->create(\&login, $exp);
my @res = $thr->join();
# ...

sub login {
    my $exp = shift;
    my $position = $exp->expect( ... );
    # ...
}
I tested with multiple threads, where one uses Expect with a custom test script and returns the script's output to the main thread. Let me know if I should post these (short) programs.
When the Expect object is created inside a thread it fails for me, too. My guess is that in that case it can't set up its pty the way it normally does.
Given the clarification in a comment, I'd use fork for the job, though.
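A minimal sketch of that fork-based arrangement, assuming the same kinit command as in the question (the /usr/bin/kinit path and the %release_config initialization here are placeholders, not from the original script). The child loops and runs Expect every 5 minutes while the parent stays free for the interactive work:

use strict;
use warnings;
use Expect;

# Placeholder config; the question uses $release_config{kinit_exec},
# so substitute whatever your script actually sets here.
my %release_config = ( kinit_exec => '/usr/bin/kinit' );

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child process: spawn Expect periodically.
    while (1) {
        my $expect = Expect->spawn($release_config{kinit_exec});
        my $position = $expect->expect(10,
            [qr/Password.*: /, sub { my $fh = shift; print $fh "password\n"; }],
            [timeout => sub { print "Timed out"; }]);
        $expect->soft_close();
        sleep 300;    # wait 5 minutes before the next run
    }
    exit 0;
}

# Parent process: carry on with the main work, e.g. waiting for user input.

Because the pty is set up in an ordinary child process rather than inside a thread, this matches the fork-based variant the question reports as working.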

Related

How does prctl set_pdeathsig work in Perl?

perl-5.24.0 on RH7
I'd like a forked process to kill itself when it detects that its parent has died. I've read that I can use Linux::Prctl's set_pdeathsig() to do that, but my test of this doesn't seem to work.
#!/usr/bin/env perl
use strict;

my $pid = fork();
die if not defined $pid;

if ($pid == 0) {
    do_forked_steps();
}

print "====PARENT===\n";
print "Hit <CR> to kill parent.\n";
my $nocare = <>;
exit;

sub do_forked_steps {
    system("/home/dgauthie/PERL/sub_fork.pl");
}
And sub_fork.pl is simply...
#!/usr/bin/env perl
use strict;
use Linux::Prctl;
Linux::Prctl::set_pdeathsig(1);
sleep(300);
exit;
(I believe passing 1 to set_pdeathsig corresponds to SIGHUP, but I also tried 9. Same results.)
When I run the first script, I can see both procs using ps in another window. When I hit <CR> in the script to kill it, I can see that proc go away, but the second one, the forked process, remains.
What am I doing wrong?
You have three processes, not two, because system forks. They are:
1. The parent process in the parent script ($pid != 0), which waits on <> and calls exit.
2. The child process created by fork in the parent script, which calls system. system forks and then waits for its child to exit before returning.
3. The child process created by system, which execs your child script, calls prctl, and sleeps.
When you press enter, process #1 dies, but process #2 does not, and since process #2 is the parent of process #3, the PDEATHSIG is never invoked.
Changing system to exec in your first script, so that a third process isn't created, causes the PDEATHSIG to fire in your toy problem, but without more information it isn't clear if that's suitable in the "real world" version of what you're trying to do.
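For the toy problem, here is a sketch of that change to the first script, keeping the rest of the question's code as-is:

#!/usr/bin/env perl
use strict;

my $pid = fork();
die if not defined $pid;

if ($pid == 0) {
    # exec replaces this child directly with sub_fork.pl, so no third
    # process is created and set_pdeathsig() in sub_fork.pl watches the
    # parent that actually dies when you press <CR>.
    exec("/home/dgauthie/PERL/sub_fork.pl")
        or die "exec failed: $!";
}

print "====PARENT===\n";
print "Hit <CR> to kill parent.\n";
my $nocare = <>;
exit;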

Parallel processing through Tcl script

I need a solution for parallel processing in Tcl (on Windows).
I tried with the Thread package but am still not able to achieve the desired output.
To simplify my requirement, I am giving a simple example below.
Requirement:
I want to run notepad.exe without affecting the current flow of execution. Control should go from the main thread to the called thread, start notepad.exe, and come back to the main thread without closing Notepad.
Tried (Tcl script):
package require Thread

set a 10

proc test_thread {b} {
    puts "in procedure $b"
    set tid [thread::create]  ;# Create a thread
    return $tid
}

puts "main thread"
puts [thread::id]
set ttid [test_thread $a]
thread::send $ttid {exec c:/windows/system32/notepad.exe &}
puts "end"
Actual output:
Notepad runs without showing any log output.
When I close the Notepad application, I get the following output:
main thread
tid0000000000001214
in procedure 10
end
Desired output:
main thread
tid0000000000001214
in procedure 10
---->> control should go to the thread and run notepad.exe without affecting the main thread's flow.
<<-------
end
So kindly help me solve this issue, and if there is any approach other than threads, please let me know.
You're using a synchronous thread::send. It's the version that is most convenient when you want to get a value back, but it does wait. You probably should be using the asynchronous version:
thread::send -async $ttid {exec c:/windows/system32/notepad.exe &}
# ^^^^^^ This flag here is what you need to add
However it is curious that the exec call is behaving as you describe at all; the & at the end should make it effectively asynchronous anyway. Unless there's some sort of nasty interaction with how Windows is interpreting asynchronous subprocess creation in this case.

node.js multithreading with max child count

I need to write a script that takes an array of values and, in a multithreaded way (forks?), runs another script with a value from the array as a parameter, but with a maximum number of running forks, so that it waits for a script to finish if there are already n running. How do I do that?
There is a module named child_process, but I'm not sure how to get it done, as it always waits for the child to terminate.
Basically, in PHP it would be something like this (I wrote it from my head; it may contain some syntax errors):
<?php
declare(ticks = 1);

$data = file('data.txt');
$max = 20;
$child = 0;

function sig_handler($signo) {
    global $child;
    switch ($signo) {
        case SIGCHLD:
            $child -= 1;
    }
}

pcntl_signal(SIGCHLD, "sig_handler");

foreach ($data as $dataline) {
    $dataline = trim($dataline);
    while ($child >= $max) {
        sleep(1);
    }
    $child++;
    $pid = pcntl_fork();
    if ($pid) {
        // SOMETHING WENT WRONG? NEVER HAPPENS!
    } else {
        exec("php processdata.php \"$dataline\"");
        exit;
    } // fork
}

while ($child != 0) {
    sleep(1);
}
?>
After the conversation in the comments, here's how to have Node execute your PHP script.
Since you're calling an external command, there's no need to create a new thread. The Node.js runloop understands that calls to external commands are async operations, and it can execute all of them at the same time.
You can see different ways for executing an external process in this SO question (linked answer may be the best in your case).
However, since you're already moving everything to Node, you may even consider rewriting your "process.php" script to Node.js code. Since, as you explained, that script connects to remote servers and databases and uses nslookup (which you may not really need with Node.js), you won't need any separate thread: they're all async operations that Node.js excels at performing.

after calling exec

After calling exec, is it possible to print a message? I tried and nothing happened. I read some articles about exec but I couldn't find my answer. It replaces the process image with a new one, without creating a new process. Is that the reason? Does it wait for something? I mean, if I use it in a child process, does it wait for the child process to end?
I can give this example:
char *args[6] = { "cat", "-b", "-t", "-v", argv[1], 0 };

else if (pid == 0) {
    printf("Child Process ID:%d, Parent ID:%d, Process Group:%d\n",
           getpid(), getppid(), getgid());
    execv("/bin/cat", args);
    printf("AHMET TANAKOL\n");
}
The exec family, like you already read, replaces the process image. That is, it loads the new program, removes your program, and starts running the new program in place of your program.
No call to exec functions ever returns, unless there is an error.

Why doesn't SIGINT get caught here?

What's going on here? I thought SIGINT would be sent to the foreground process group.
(I think, maybe, that system() is running a shell which is creating a new process group for the child process? Can anyone confirm this?)
% perl
local $SIG{INT} = sub { print "caught signal\n"; };
system('sleep', '10');
Then hit Ctrl+D, then Ctrl+C immediately, and notice that "caught signal" is never printed.
I feel like this is a simple thing... is there any way to work around this? The problem is that running a bunch of commands via system means having to hold Ctrl+C until all iterations are completed (because Perl never gets the SIGINT), which is rather annoying...
How can this be worked around? (I already tested using fork() directly and understand that this works... this is not an acceptable solution at this time.)
UPDATE: please note, this has nothing to do with "sleeping", only the fact that the command takes some arbitrarily long amount of time to run, considerably more than the Perl around it. So much so that pressing Ctrl+C gets sent to the command (as it's in the foreground process group?) and somehow never gets sent to Perl.
from perldoc system:
Since SIGINT and SIGQUIT are ignored during the execution of system,
if you expect your program to terminate on receipt of these signals you will need to arrange to do so yourself based on the return value.
#args = ("command", "arg1", "arg2");
system(#args) == 0
or die "system #args failed: $?"
If you'd like to manually inspect system's failure, you can check all possible failure
modes by inspecting $? like this:
if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d, %s coredump\n",
        ($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
    printf "child exited with value %d\n", $? >> 8;
}
Alternatively, you may inspect the value of ${^CHILD_ERROR_NATIVE} with the W*() calls from the POSIX module.
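A short sketch of that approach, under the assumption that what you actually want is for the Perl parent to terminate when the child is killed by Ctrl+C: check how the child ended and re-raise SIGINT in the parent.

use strict;
use warnings;
use POSIX ":sys_wait_h";    # WIFSIGNALED, WTERMSIG

system('sleep', '10');

# system() shields the parent from SIGINT, so inspect how the child
# ended instead and propagate the signal ourselves.
if (WIFSIGNALED(${^CHILD_ERROR_NATIVE})
        && WTERMSIG(${^CHILD_ERROR_NATIVE}) == POSIX::SIGINT) {
    $SIG{INT} = 'DEFAULT';    # drop any handler before re-raising
    kill 'INT', $$;           # terminate the parent via SIGINT too
}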
I don't quite get what you're trying to achieve here... but have you tried simply comparing to:
perl -wle'local $SIG{INT} = sub { print "caught signal"; }; sleep 10;'
Can you explain what effect you're trying to go for, and why you are invoking the shell? Can you simply call into the external program directly without involving the shell?

Resources