perl-5.24.0 on RH7
I'd like a forked process to kill itself when it determines that its parent has died. I've read that I can use Linux::Prctl's set_pdeathsig() to do that, but my test of this doesn't seem to work.
#!/usr/bin/env perl
use strict;

my $pid = fork();
die if not defined $pid;

if ($pid == 0) {
    do_forked_steps();
}

print "====PARENT===\n";
print "Hit <CR> to kill parent.\n";
my $nocare = <>;
exit;

sub do_forked_steps {
    system("/home/dgauthie/PERL/sub_fork.pl");
}
And sub_fork.pl is simply...
#!/usr/bin/env perl
use strict;
use Linux::Prctl;
Linux::Prctl::set_pdeathsig(1);
sleep(300);
exit;
(I believe passing 1 to set_pdeathsig corresponds to SIGHUP, but I also tried 9. Same results.)
When I run the first script, I can see both procs using ps in another window. When I hit <CR> in the script to kill it, I can see that proc go away, but the second one, the forked process, remains.
What am I doing wrong?
You have three processes, not two, because system forks. They are:
1. The parent process in the parent script ($pid != 0), which waits on <> and then calls exit.
2. The child process created by fork in the parent script, which calls system. system forks and then waits for its child to exit before returning.
3. The child process created by system, which execs your child script, calls prctl, and sleeps.
When you press enter, process #1 dies, but process #2 does not, and since process #2 is the parent of process #3, the PDEATHSIG is never invoked.
Changing system to exec in your first script, so that a third process isn't created, causes the PDEATHSIG to fire in your toy problem, but without more information it isn't clear if that's suitable in the "real world" version of what you're trying to do.
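For reference, a minimal sketch of that change to do_forked_steps (path as in the original):

sub do_forked_steps {
    # exec never returns on success: the forked child *becomes*
    # sub_fork.pl, so the process calling prctl() is a direct
    # child of the process you kill with <CR>
    exec("/home/dgauthie/PERL/sub_fork.pl")
        or die "exec failed: $!";
}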
Is there a way to kill a child_process.exec() from a previous request with a new request?
I have code like this:
var proc = require('child_process').exec('ffmpeg -i a.mp4 -o b.avi');
The scenario is:
request comes in -> check whether the exec() is running -> if it is running, kill it -> run a new exec() -> return!
Is it possible to kill the running process from a new HTTP request?
Or is there a way to set an app-level status in Node.js and set a flag, then check the flag to stop the process?
In my opinion, you can kill a child process in two ways.
Solution 1:
Use the child process object's kill() method:
proc.kill();
Solution 2:
Save the child process's PID and kill it with Node's global process object anywhere in your application (for example, inside an HTTP request handler):
// Get process's pid and save it somewhere or global variable
let pid = proc.pid;
When the next HTTP request comes in, you can kill it:
// Use node process to kill it anywhere by pid
process.kill(pid);
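For solution 2 end to end, a rough sketch (startConversion is a hypothetical name; call it from your HTTP route handler):

const { exec } = require('child_process');

let proc = null;  // handle to the running conversion, if any

function startConversion() {
    // if a previous exec() is still running, kill it first
    if (proc) {
        proc.kill();  // or process.kill(proc.pid)
        proc = null;
    }
    proc = exec('ffmpeg -i a.mp4 -o b.avi');
    proc.on('exit', () => { proc = null; });  // clear the handle once it finishes
}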
For your information, if you want to check whether a child process is running, see my question here: Nodejs how to check independently a process is running by PID?
I am working on a script that needs to spawn an Expect process periodically (every 5 minutes) to do some work. Below is the code that spawns an Expect process and does that work. The main process of the script is busy at all times, for example waiting for user input, so I call this function, spawn_expect, from a thread that runs it every 5 minutes, but Expect is not working as expected.
If, however, I replace the thread with another process, that is, if I fork and let one process take care of spawning Expect while the other does the main work of the script (for example, waiting at a prompt), then Expect works fine.
My question is: is it possible to have a thread spawn an Expect process, or do I have to resort to using a separate process for this work? Thanks!
sub spawn_expect {
    my $expect = Expect->spawn($release_config{kinit_exec});
    my $position = $expect->expect(10,
        [qr/Password.*: /, sub { my $fh = shift; print $fh "password\n"; }],
        [timeout => sub { print "Timed out"; }]);
    # if this function is run via a process, $position is defined;
    # if it is run via a thread, it is not
    ...
}
Create the Expect object beforehand (not inside a thread) and pass it to a thread
my $exp = Expect->spawn( ... );
$exp->raw_pty(1);
$exp->log_stdout(0);

my ($thr) = threads->create(\&login, $exp);
my @res = $thr->join();
# ...

sub login {
    my $exp = shift;
    my $position = $exp->expect( ... );
    # ...
}
I tested with multiple threads, where one uses Expect with a custom test script and returns the script's output to the main thread. Let me know if I should post these (short) programs.
When the Expect object is created inside a thread it fails for me, too. My guess is that in that case it can't set up its pty the way it normally does.
Given the clarification in a comment, though, I'd use fork for the job.
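For completeness, a minimal fork-based sketch, reusing kinit_exec and the expect arguments from the question:

my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {
    # the child owns the Expect session and its pty
    my $exp = Expect->spawn($release_config{kinit_exec});
    $exp->expect(10,
        [qr/Password.*: /, sub { my $fh = shift; print $fh "password\n"; }],
        [timeout => sub { print "Timed out"; }]);
    exit 0;
}
# the parent carries on with the script's main work here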
I need a solution for parallel processing in Tcl (Windows).
I tried with the Thread package but am still not able to achieve the desired output.
To simplify my requirement, here is a simple example.
Requirement:
I want to run notepad.exe without affecting my current flow of execution. Control should go from the main thread to the spawned thread, start notepad.exe, and come back to the main thread without waiting for Notepad to close.
What I tried (Tcl script):
package require Thread

set a 10

proc test_thread {b} {
    puts "in procedure $b"
    set tid [thread::create] ;# Create a thread
    return $tid
}

puts "main thread"
puts [thread::id]
set ttid [test_thread $a]
thread::send $ttid {exec c:/windows/system32/notepad.exe &}
puts "end"
Actual output:
Notepad runs, but no log is shown.
When I close the Notepad application, I get the following output:
main thread
tid0000000000001214
in procedure 10
end
Desired output:
main thread
tid0000000000001214
in procedure 10
---->> control should go to the thread and run notepad.exe without affecting the main thread's flow.
<<-------
end
So kindly help me solve this issue, and if there is any other approach apart from threads, let me know.
You're using a synchronous thread::send. That's the version that is most convenient when you want to get a value back, but it does wait. You probably should be using the asynchronous version:
thread::send -async $ttid {exec c:/windows/system32/notepad.exe &}
# ^^^^^^ This flag here is what you need to add
However, it is curious that the exec call is behaving as you describe at all; the & at the end should make it effectively asynchronous anyway, unless there's some sort of nasty interaction with how Windows interprets asynchronous subprocess creation in this case.
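If you do want a value back without blocking, the Thread package also lets thread::send -async deliver the result into a variable you can vwait on; a sketch, reusing $ttid from above:

thread::send -async $ttid {exec c:/windows/system32/notepad.exe &} result
# ... do other work in the main thread ...
vwait result   ;# enters the event loop until the result variable is written
puts "thread returned: $result"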
After calling exec, is it possible to print a message? I tried and nothing happened. I read some articles about exec but couldn't find my answer. It replaces the process image with a new one rather than creating a new process; is that the reason? And does it wait for anything? I mean, if I use it in a child process, does it wait for the child process to end?
I can give this example:
char *args[6] = { "cat", "-b", "-t", "-v", argv[1], 0 };
else if (pid == 0) {
    printf("Child Process ID:%d, Parent ID:%d, Process Group:%d\n",
           getpid(), getppid(), getgid());
    execv("/bin/cat", args);
    printf("AHMET TANAKOL\n");
}
The exec family, as you already read, replaces the process image. That is, it loads the new program, removes your program, and starts running the new program in place of your program.
No call to an exec function ever returns unless there is an error.
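A minimal sketch to illustrate the point: the code after execv runs only if execv itself fails (the file names here are arbitrary).

#include <stdio.h>
#include <unistd.h>

int main(void) {
    char *args[] = { "cat", "/etc/hostname", NULL };
    execv("/bin/cat", args);
    /* reached only on failure: on success the process image has
       already been replaced and this code no longer exists */
    perror("execv");
    return 1;
}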
What's going on here? I thought SIGINT would be sent to the foreground process group.
(I think, maybe, that system() is running a shell which is creating a new process group for the child process? Can anyone confirm this?)
% perl
local $SIG{INT} = sub { print "caught signal\n"; };
system('sleep', '10');
Then hit Ctrl+D, then Ctrl+C immediately, and notice that "caught signal" is never printed.
I feel like this is a simple thing... any way to work around it? The problem is that running a bunch of commands via system means holding Ctrl+C until all iterations are completed (because Perl never gets the SIGINT), which is rather annoying...
How can this be worked around? (I already tested using fork() directly and understand that this works... this is not an acceptable solution at this time)
UPDATE: please note, this has nothing to do with "sleeping", only the fact that the command takes some arbitrarily long amount of time to run, considerably more than the Perl around it. So much so that pressing Ctrl+C gets sent to the command (as it's in the foreground process group?) and somehow manages never to be sent to Perl.
from perldoc system:
Since SIGINT and SIGQUIT are ignored during the execution of system,
if you expect your program to terminate on receipt of these signals you will need to arrange to do so yourself based on the return value.
@args = ("command", "arg1", "arg2");
system(@args) == 0
    or die "system @args failed: $?";
If you'd like to manually inspect system's failure, you can check all possible failure
modes by inspecting $? like this:
if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d, %s coredump\n",
        ($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
    printf "child exited with value %d\n", $? >> 8;
}
Alternatively, you may inspect the value of ${^CHILD_ERROR_NATIVE} with the W*() calls from the POSIX module.
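To "arrange to do so yourself", one common pattern (a sketch, using the sleep command from the question) is to check whether the child died from SIGINT and re-raise the signal in the parent:

system('sleep', '10');
if (($? & 127) == 2) {    # low 7 bits of $? hold the signal number; 2 == SIGINT
    # the child was killed by Ctrl+C; re-raise it so the parent's
    # own $SIG{INT} handler (or the default action) runs
    kill 'INT', $$;
}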
I don't quite get what you're trying to achieve here... but have you tried simply comparing to:
perl -wle'local $SIG{INT} = sub { print "caught signal"; }; sleep 10;'
Can you explain what effect you're trying to go for, and why you are invoking the shell? Can you simply call into the external program directly without involving the shell?