Child process (created after forking) gets stuck on Windows after some time when ssh'ing from Windows to a Linux machine - linux

The code snippet shown below works as follows: after forking, the child process ssh'es from Windows to a Linux machine and runs script1_bkt.csh there. Logs are dumped to a Windows path ($AngleLogFilePath\Angle_${date}.log, i.e. V:\ricky_angle_testing_63.1\depot\vfr\63.10\main\logs\angleLogsrickyPersonal\Angle_${date}.log).
The parent process runs trialSet.pl and trialSetDepWint.pl in the foreground, and that part works fine.
V: is a network filer (CIFS share) for /dv/lmv/mentor/.
Issue:
The child process (created by the fork), which ssh'es from the Windows machine to the Linux machine and runs script1_bkt.csh, gets stuck at some point (not every time).
Point to note: on the Linux machine (qwelnx45), the PID of script1_bkt.csh no longer exists after some time, which means the script has completed. But on Windows, the PID of ssh.exe (through which script1_bkt.csh is triggered from Windows) still exists, which means that on Windows the command ($GoToUnix cd $ClientAltRoot/lkg ; source script1_bkt.csh) has not completed and is stuck. The script usually takes about 3 hours to complete, but sometimes it never completes because it gets stuck. Again, this does not happen every time.
One more important point: when the child process gets stuck on Windows, the log file ($AngleLogFilePath\Angle_${date}.log) does not contain all the data that script1_bkt.csh produces, even though script1_bkt.csh has finished on Linux; i.e. the log file is incomplete (apparently because the stuck process stopped writing to it).
CODE SNIPPET:
use strict;
use warnings;
use POSIX qw( strftime );          # needed for strftime()
use File::Path qw( make_path );

my $ClientAltRoot    = "/dv/lmv/mentor/ricky_angle_testing_63.1/depot/vfr/63.10/main/";
my $GoToUnix         = "C:\\cygwin\\bin\\ssh.exe qwelnx45";
my $AngleLogFilePath = "V:\\ricky_angle_testing_63.1\\depot\\vfr\\63.10\\main\\logs\\angleLogsrickyPersonal";
my $date             = strftime("%Y%m%d_%H%M%S", localtime(time));

make_path($AngleLogFilePath) or warn "Failed to create dir $AngleLogFilePath";

my ($angleReturnStatus, $angleFailed) = (0, 0);
my $aqpid;

# fork the Angle process
if ($aqpid = fork()) {
    # parent
    $SIG{CHLD} = 'DEFAULT';
} elsif (defined $aqpid) {
    # child: ssh to the Linux machine and run script1_bkt.csh there
    sleep 10;
    print "Angle child started\n";
    $angleReturnStatus = system ("$GoToUnix cd $ClientAltRoot/lkg ; source script1_bkt.csh > $AngleLogFilePath\\Angle_${date}.log ");
    $angleFailed += 1 if ($angleReturnStatus > 0);
    exit 0;
}

print "##### Starting the foreground script ######\n";
system "$GoToUnix \"cd /home/ricky/; echo abc ; /home/ricky/trialSet.pl > setTrial_ricky/set_${date}.log\" ";
print "Ended SetDep\n";

print "Waiting as child process has not ended\n";
1 while (wait() != -1);

system("perl C:\\ricky\\Testing\\trialSetDepWint.pl");
print "Demo script ended\n";
Please tell me why the process is getting stuck. What could be a possible solution to eliminate this issue?
Thanks in advance.

Actually, the issue was due to the following reasons:
QuickEdit Mode was ON. ## I turned QuickEdit Mode off on my system as a solution.
Interaction with the command prompt window in which the run/script is going on. ## Avoid using the machine while the run/script is in progress. When you work on the machine you inevitably switch between command prompt windows, and because of this switching (and the accidental text selection that comes with it) the run sometimes gets stuck until you press ENTER or some other key.
I followed these two steps and am not seeing the issue any more.
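If it really is the console freezing the child (a QuickEdit text selection pauses console output, and a process blocks as soon as it tries to write to the paused console), another possible mitigation is to make sure the ssh child never writes to the console at all. The following is only a sketch, not a confirmed fix: it reuses the variables from the snippet above, quotes the remote command the same way as the foreground call does, and adds 2>&1 so ssh's stderr also goes into the log instead of the console.

# Hedged sketch: redirect stderr as well, so the ssh child writes nothing to the
# console window and a console frozen by a QuickEdit selection cannot block it.
# Variable names are taken from the code snippet in the question.
my $logFile = "$AngleLogFilePath\\Angle_${date}.log";
$angleReturnStatus = system(
    "$GoToUnix \"cd $ClientAltRoot/lkg ; source script1_bkt.csh\" > $logFile 2>&1"
);
$angleFailed += 1 if ($angleReturnStatus > 0);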
Thanks.

Related

NodeJS child spawn exits without even waiting for process to finish

I'm trying to create an Angular 11 application that connects to a NodeJS API which runs bash scripts when called; on exit it should either send an error or send a 200 status with a confirmation message.
Here is one of the functions from that API. It runs a script called initialize_event.sh, gives it a few arguments when prompted, and once the program finishes its course it should display a success message (there is no error block for this function):
exports.create_event = function (req, res) {
    var child = require("child_process").spawn;
    var spawned = child("sh", ["/home/ubuntu/master/initialize_event.sh"]);
    spawned.stdout.once("data", function (data) {
        spawned.stdin.write(req.body.name + "\n");
    });
    spawned.stdout.once("data", function (data) {
        spawned.stdin.write(req.body.domain_name + "\n");
    });
    spawned.on("exit", function (err) {
        res.status(200).send(JSON.stringify("Event created successfully"));
    });
};
The bash script is a long one, but what it basically does is take two variables (event name and domain name) and uses that to create a new event instance. Here are the first few lines of code for the program:
#!/bin/bash
# GET EVENT NAME
echo -n "Enter event name: "; read event;
echo -n "Enter event domain: "; read eventdomain;
# LOAD VARIABLES
export eventdomain;
export event;
export ename=$event-env;
export event_rds=someurl.com;
export master_rds=otherurl.com;
export master_db=master;
# rest of code...
When called on its own directly from the terminal, the process takes around 30-40 seconds after taking input to create an event and then exits once completed. I can then check the list of events created using another script, and the new event shows up in the list. However, when I call this script from the NodeJS function, it takes the inputs and then exits within 5 or 6 seconds, saying the event has been created successfully. When I check the list of events, there is no event created. I wait to see if the process is still running and check back after a few minutes: still no event created.
I suspect that the spawn exits before the script can run completely. I thought that maybe the stdio streams were still open, so I tried to use spawned.on("close") instead of spawned.on("exit"), but the program still exits before it runs all the way through. I don't see any exceptions or errors in the Node Express console, so I can't really figure out why the program exits successfully without running to completion.
I've used the same inputs when running from the terminal and on Postman, and have logged them as well to see if there are any empty variables being sent, but found nothing wrong with them either. I've double-checked the paths as well, literally copy-pasted from pwd to make sure I haven't been missing something, but still nothing.
What am I doing wrong here??
So here's the problem I found and solved:
The folder where the Node Express was being served from, and the folder where the bash scripts were saved were in different directories.
Problem:
So basically, whenever I created a child process, it was created with the following current directory:
/var/www/html/node/
But the bash scripts were run from:
/var/www/html/other/bash/scripts/
so any commands in the bash script that involved a directory change (like cd) were relative to the bash directory.
However, since the spawn's current directory was /var/www/html/node, the script executed by the spawn also had the node folder as its current working directory, and any directory changes within the script were now invalid because those paths don't exist relative to the node directory.
E.g.
When run from terminal:
test.sh -> cd savedir/ -> /var/www/html/other/bash/scripts/savedir/ -> exists
When run from spawn:
test.sh -> cd savedir/ -> /var/www/html/node/savedir/ -> Doesn't exist!
Solution:
The easiest way I was able to solve this was to modify the test.sh file: at the start I added cd /var/www/html/other/bash/scripts/. This changed the current directory of the spawned script to the right place, which made all the mv, cd and other path-relative commands valid.

Can I capture the output of another process that I started?

I'm currently using > /dev/null & to have Perl script A run Perl script B totally independently, and it works fine. Script B runs without throwing back any output and stays alive when Script A ends, even when my terminal session ends.
Not saying I need it, but is there a way to recapture its output if I wanted to?
Thanks
Your code framework may look like this:
#!/usr/bin/perl
# I'm a.pl
#...
system "b.pl > ~/b.out &";
while (1)
{
    my $time = localtime;
    my ($fsize, $mtime) = (stat "/var/log/syslog")[7,9];
    print "syslog: size=$fsize, mtime=$mtime at $time\n";
    sleep 60;
}
while the b.pl may look like:
#!/usr/bin/perl
# I'm b.pl
while (1)
{
    my $time = localtime;
    my $fsize_a = (stat "/var/log/auth.log")[7];
    my $fsize_s = (stat "/var/log/syslog")[7];
    print "fsize: syslog=$fsize_s auth.log=$fsize_a at $time\n";
    sleep 60;
}
a.pl and b.pl do their job independently.
b.pl is called by a.pl as a background job, and its output goes to b.out (so it won't mess up the screen of a.pl).
You can read b.out from some other terminal, after a.pl has finished, or while a.pl is temporarily put in the background.
About terminating the two scripts:
`ctrl-c` for a.pl
`killall b.pl` for b.pl
Note:
b.pl will keep running even when you close your terminal (assuming your terminal runs as a desktop application), so you don't need the `nohup` command here. (It is perhaps only needed on a console.)
If your b.pl may spit out error messages from time to time, then you still need to deal with its stderr. That is left as your homework.
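For that homework, a minimal sketch is the usual shell redirection, reusing the same b.out path as above (the b.err path is just an example name):

# In a.pl: also capture b.pl's stderr.
system "b.pl > ~/b.out 2>&1 &";        # stdout and stderr in one file

# or keep errors in a separate file:
system "b.pl > ~/b.out 2> ~/b.err &";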

How to restart a group of processes when it is triggered from one of them in C code

I have a few processes, *.rt, written in C.
I want to restart all of them (*.rt) from within the process foo.rt (one of the *.rt) itself, in built-in C code.
Normally I have 2 bash scripts, stop.sh and start.sh, which are invoked from the shell.
Here is what the scripts do:
stop.sh --> sends a kill -9 signal to all "*.rt" processes.
start.sh --> starts the processes named "*.rt".
My problem is how I can restart all the *.rt processes from C code. Is there any way to restart all "*.rt" processes, triggered from the foo.rt file?
I tried the following in foo.rt but it doesn't work, because stop.sh ends up killing all *.rt processes, even the child that was forked to execute the start.sh script:
...
case 708: /* there is a trigger signal here */
{
    result = APP_RES_PRG_OK;
    if (fork() == 0) { /* child */
        execl("/bin/sh", "sh", "-c", "/sbin/stop.sh", NULL);
        execl("/bin/sh", "sh", "-c", "/sbin/start.sh", NULL); /* Error: this will be killed by the /sbin/stop.sh command */
    }
}
I have solved the problem with the "at" daemon in Linux.
I invoke 2 system() calls, stop & start.
My first attempt was faulty as explained above: execl replaces the process image and never returns to the later execl unless it fails.
Here is my solution
case 708: /* there is a trigger signal here */
{
    system("echo '/sbin/start.sh' | at now + 2 min");
    system("echo '/sbin/stop.sh' | at now + 1 min");
}
You could use process groups, at least if all your related processes are originated by the same process...
So you could write a glue program in C which sets up a new process group using setpgrp(2) and stores its pid (or keeps running, waiting for some IPC).
Then you would stop that process group by using killpg(2).
See also the notion of sessions and setsid(2).
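Since this answer has no code, here is a minimal sketch of the same idea, written in Perl rather than C (Perl's setpgrp and kill with a negative pid map directly onto setpgrp(2) and killpg(2)); the glob path, the signal choice and the structure of the glue process are assumptions for illustration, not taken from the question:

# Perl sketch of the process-group idea above; setpgrp()/kill(-$pgid) stand in
# for the C calls setpgrp(2)/killpg(2).
my @rt_programs = glob("/path/to/*.rt");   # hypothetical location of the *.rt binaries

my $leader = fork();
die "fork failed: $!" unless defined $leader;

if ($leader == 0) {
    setpgrp(0, 0);                         # become leader of a new process group
    for my $prog (@rt_programs) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            exec($prog) or exit 255;       # each *.rt inherits the new process group
        }
    }
    1 while wait() != -1;                  # glue process stays around as group leader
    exit 0;
}

# Later, anything that knows the leader's pid can stop the whole group at once,
# the equivalent of killpg(2):
# kill 'KILL', -$leader;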

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want; it's not a major issue, but it is annoying.
1.) I run a third-party program that produces some output on stderr. Some of it is useful, and some of it is stuff I regularly don't care about and don't want dumped to the screen; however, I do want the useful parts of stderr shown. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to decide whether or not to show the stderr.
2.) This works fine. However, the solution I implemented dumps my errors at the right time but then returns a bash prompt; I want to summarise the status of the errors at the end of the function, and echo-ing there prints the text after the prompt, meaning I have to press Enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data;
    do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However, what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimal working example, and it's contrived. While other solutions to my error-stream problem are welcome, I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, while the function still has work left to do.
I suggest you wrap the command in a script so you can control how long to wait before your prompt is back (I suggest 1 second more than the time the function is expected to need to process the remaining lines).
I successfully managed to do this like this:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data;
    do
        echo Line was:"$data"
    done
    sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution ">(ProcessErrors)" is a subprocess of the script ./TestErrorStream.sh. So when the script ends, the subprocess is no longer tied to it, nor to the wrapper. That's why we need that final "sleep 6".
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}

# Open subprocess
exec 60> >(ProcessErrors)
P=$!

# Do the work
2>&60 ./TestErrorStream.sh

# Close connection or else subprocess would keep on reading
exec 60>&-

# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too, like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.

How can I change the current directory in a thread-safe manner in Perl?

I'm using Thread::Pool::Simple to create a few working threads. Each working thread does some stuff, including a call to chdir followed by an execution of an external Perl script (from the jbrowse genome browser, if it matters). I use capturex to call the external script and die on its failure.
I discovered that when I use more than one thread, things start to get messy. After some research, it seems that the current directory of some threads is not the correct one.
Perhaps chdir propagates between threads (i.e. isn't thread-safe)?
Or perhaps it's something with capturex?
So, how can I safely set the working directory for each thread?
** UPDATE **
Following the suggestions to change directory while executing, I'd like to ask how exactly I should pass these two commands to capturex.
Currently I have:
my @args = ( "bin/flatfile-to-json.pl", "--gff=$gff_file", "--tracklabel=$track_label", "--key=$key", @optional_args );
capturex( [0], @args );
How do I add another command to @args?
Will capturex still die on errors from either of the commands?
I think that you can solve your "how do I chdir in the child before running the command" problem pretty easily by abandoning IPC::System::Simple as not the right tool for the job.
Instead of doing
my $output = capturex($cmd, @args);
do something like:
use autodie qw(open close);

my $pid = open my $fh, '-|';
unless ($pid) {    # this is the child
    chdir($wherever);
    exec($cmd, @args) or exit 255;
}
my $output = do { local $/; <$fh> };
# If the child exited with an error or couldn't be run, the exception will
# be raised here (via autodie; feel free to replace it with
# your own handling)
close($fh);
If you were getting a list of lines instead of scalar output from capturex, the only thing that needs to change is the second-to-last line (to my @output = <$fh>;).
More info on forking-open is in perldoc perlipc.
The good thing about this, in preference to capture("chdir wherever ; $cmd @args"), is that it doesn't give the shell a chance to do bad things to your @args.
Updated code (doesn't capture output)
my $pid = fork;
die "Couldn't fork: $!" unless defined $pid;
unless ($pid) {    # this is the child
    chdir($wherever);
    open STDOUT, ">/dev/null";    # optional: silence subprocess output
    open STDERR, ">/dev/null";    # even more optional
    exec($cmd, @args) or exit 255;
}
wait;
die "Child error $?" if $?;
I don't think "current working directory" is a per-thread property. I'd expect it to be a property of the process.
It's not clear exactly why you need to use chdir at all though. Can you not launch the external script setting the new process's working directory appropriately instead? That sounds like a more feasible approach.
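To address the follow-up question about capturex more concretely: rather than trying to squeeze a chdir into @args, the forking-open approach from the first answer can be wrapped in a small helper so the chdir happens only in the short-lived child and never touches the other threads. This is only a sketch under stated assumptions: capture_in_dir and $jbrowse_dir are hypothetical names, the error handling is a placeholder, and @args is the list from the question.

# Hypothetical helper: run a command in a given directory and capture its output.
# The chdir happens only in the forked child, so other threads are unaffected.
sub capture_in_dir {
    my ($dir, @cmd) = @_;

    my $pid = open(my $fh, '-|');          # fork; the child's STDOUT is piped to $fh
    die "Couldn't fork: $!" unless defined $pid;

    unless ($pid) {                        # child
        chdir($dir) or exit 254;
        exec(@cmd)  or exit 255;
    }

    my $output = do { local $/; <$fh> };   # slurp everything the child prints
    close($fh);                            # waits for the child and sets $?
    die "Command '@cmd' failed (status $?)" if $?;
    return $output;
}

# Usage with the @args from the question (variable names as in the question):
# my @args = ("bin/flatfile-to-json.pl", "--gff=$gff_file",
#             "--tracklabel=$track_label", "--key=$key", @optional_args);
# my $out  = capture_in_dir($jbrowse_dir, @args);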
