How to restart a group of processes when it is triggered from one of them in C code - linux

I have a few processes (*.rt) written in C.
I want to restart all of them (*.rt) from within one of them, foo.rt, in its own built-in C code.
Normally I have two bash scripts, stop.sh and start.sh, which are invoked from the shell.
Here is what the scripts do:
stop.sh --> sends kill -9 to all "*.rt" processes.
start.sh --> starts the "*.rt" processes.
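For illustration, a minimal sketch of what such scripts might look like (the /sbin location and the ".rt" name pattern are assumptions, not the original scripts):

#!/bin/bash
# stop.sh -- forcibly kill every process whose name ends in .rt
pkill -9 '\.rt$'

#!/bin/bash
# start.sh -- launch every *.rt program found in the assumed install directory
for prog in /sbin/*.rt; do
    "$prog" &
done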
My problem is: how can I restart all the *.rt processes from C code? Is there any way to restart all "*.rt" programs, triggered from within foo.rt?
I tried the following in foo.rt, but it doesn't work, because stop.sh kills all *.rt processes, including the forked child that was supposed to execute the start.sh script:
...
case 708: /* There is a trigger signal here */
{
    result = APP_RES_PRG_OK;
    if (fork() == 0) { /* child */
        execl("/bin/sh", "sh", "-c", "/sbin/stop.sh", (char *)NULL);
        execl("/bin/sh", "sh", "-c", "/sbin/start.sh", (char *)NULL); /* Error: never reached if the first execl succeeds, and the child is killed by /sbin/stop.sh anyway */
    }
}

I have solved the problem with the "at" daemon in Linux.
I invoke two system() calls, one for stop and one for start.
My first attempt was faulty, as explained above: execl replaces the process image and never returns to the later execl unless it fails.
Here is my solution:
case 708: /* There is a trigger signal here */
{
    /* stop.sh runs one minute from now, start.sh a minute after that */
    system("echo '/sbin/start.sh' | at now + 2 min");
    system("echo '/sbin/stop.sh' | at now + 1 min");
}

You could use process groups, at least if all your related processes originate from the same parent process...
So you could write a glue program in C which sets up a new process group using setpgrp(2) and stores its pid (or keeps running, waiting for some IPC).
Then you would stop that process group using killpg(2).
See also the notion of sessions and setsid(2).
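For illustration, here is a minimal sketch of such a glue launcher in C (the program paths and the restart trigger are assumptions):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical list of the *.rt programs to start */
    const char *progs[] = { "/sbin/foo.rt", "/sbin/bar.rt" };

    setpgid(0, 0);                 /* become leader of a new process group */
    pid_t pgid = getpgrp();
    printf("launcher pgid: %d\n", (int)pgid);

    for (unsigned i = 0; i < sizeof progs / sizeof progs[0]; i++) {
        if (fork() == 0) {         /* children inherit the launcher's group */
            execl(progs[i], progs[i], (char *)NULL);
            _exit(127);            /* only reached if execl fails */
        }
    }

    /* From elsewhere (e.g. from foo.rt), the whole group can be stopped with
       killpg(pgid, SIGTERM) and started again by re-running this launcher. */
    pause();                       /* park here until a signal arrives */
    return 0;
}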

Related

Child process (created after forking) gets stuck on Windows after some time when ssh-ing to a Linux machine from Windows

The code snippet shown below works as follows: after forking, the child process ssh-es from Windows to a Linux machine and runs script1_bkt.csh there. The logs are dumped to a Windows path ($AngleLogFilePath\Angle_${date}.log, i.e. V:\ricky_angle_testing_63.1\depot\vfr\63.10\main\logs\angleLogsrickyPersonal\Angle_${date}.log).
The parent processes (trialSet.pl and trialSetDepWint.pl) run in the foreground and work fine.
V: is a network filer (CIFS share) for /dv/lmv/mentor/.
Issue:
The child process (after the fork), which ssh-es onto the Linux machine from the Windows machine and runs script1_bkt.csh, gets stuck at some point (not every time).
Point to note: on the Linux machine (qwelnx45), the PID of script1_bkt.csh no longer exists after some time, which means the process has completed. But on Windows, the PID of ssh.exe (through which script1_bkt.csh is triggered) still exists, which means that on Windows the command ($GoToUnix74 cd $ClientAltRoot/lkg ; source script1_bkt.csh) has not completed and is stuck. The script usually takes about 3 hours to complete, but sometimes it never completes because it gets stuck. The script does not get stuck every time.
One more important point: when the child process gets stuck on Windows, even though script1_bkt.csh has finished on Linux, the log file ($AngleLogFilePath\Angle_${date}.log) does not contain all the data that script1_bkt.csh produces, i.e. the log file is incomplete (it seems that because the process got stuck, it stopped writing to the log file).
CODE SNIPPET:
use POSIX qw( strftime );       # needed for strftime below
use File::Path qw( make_path );

my $ClientAltRoot    = "/dv/lmv/mentor/ricky_angle_testing_63.1/depot/vfr/63.10/main/";
my $GoToUnix         = "C:\\cygwin\\bin\\ssh.exe qwelnx45";
my $AngleLogFilePath = "V:\\ricky_angle_testing_63.1\\depot\\vfr\\63.10\\main\\logs\\angleLogsrickyPersonal";
my $date             = strftime("%Y%m%d_%H%M%S", localtime(time));
make_path("$AngleLogFilePath") or warn "Failed to create dir $AngleLogFilePath";

my $aqpid;
# fork angle process
if ($aqpid = fork()) {
    # parent
    $SIG{CHLD} = 'DEFAULT';
} elsif (defined($aqpid)) {
    # child: ssh to the Linux machine, run the script, and log to the Windows path
    sleep 10;
    print "Angle child started\n";
    $angleReturnStatus = system("$GoToUnix cd $ClientAltRoot/lkg ; source script1_bkt.csh > $AngleLogFilePath\\Angle_${date}.log ");
    $angleFailed += 1 if ($angleReturnStatus > 0);
    exit 0;
}
print "##### Starting the foreground script ###### \n";
system "$GoToUnix \"cd /home/ricky/; echo abc ; /home/ricky/trialSet.pl > setTrial_ricky/set_${date}.log\" ";
print "Ended SetDep\n";
print "Waiting as child process has not ended\n";
1 while (wait() != -1);
system("perl C:\\ricky\\Testing\\trialSetDepWint.pl ");
print "Demo script ended\n";
print "Demo script ended\n";
Please tell me why the process is getting stuck. What could be a possible solution to eliminate this issue?
Thanks in advance.
Actually, the issue was due to the following reasons:
Quick (QuickEdit) Mode was ON in the console. ## I turned off Quick Mode on my system as a solution.
Interruption of the command prompt in which the run/script is going on. ## Avoid using the machine while the run/script is going on, because when you work on the machine you inevitably switch between command prompts, and due to this switching your run sometimes gets stuck until you press ENTER or another key.
I applied these two measures and am not seeing the issue anymore.
Thanks.

NodeJS child spawn exits without even waiting for process to finish

I'm trying to create an Angular 11 application that connects to a NodeJS API that runs bash scripts when called and, on exit, should either send an error or a 200 status with a confirmation message.
Here is one of the functions from that API. It runs a script called initialize_event.sh, feeds it a few arguments when prompted, and once the script finishes it should send a success message (there is no error block for this function):
exports.create_event = function (req, res) {
  var child = require("child_process").spawn;
  var spawned = child("sh", ["/home/ubuntu/master/initialize_event.sh"]);
  spawned.stdout.once("data", function (data) {
    spawned.stdin.write(req.body.name + "\n");
  });
  spawned.stdout.once("data", function (data) {
    spawned.stdin.write(req.body.domain_name + "\n");
  });
  spawned.on("exit", function (err) {
    res.status(200).send(JSON.stringify("Event created successfully"));
  });
};
The bash script is a long one, but what it basically does is take two variables (event name and domain name) and use them to create a new event instance. Here are the first few lines of the script:
#!/bin/bash
# GET EVENT NAME
echo -n "Enter event name: "; read event;
echo -n "Enter event domain: "; read eventdomain;
# LOAD VARIABLES
export eventdomain;
export event;
export ename=$event-env;
export event_rds=someurl.com;
export master_rds=otherurl.com;
export master_db=master;
# rest of code...
When called directly from the terminal, the process takes around 30-40 seconds after taking the input to create an event, and then exits once completed. I can then check the list of events with another script, and the new event shows up in the list. However, when I call the script from the NodeJS function, it takes the inputs and exits within 5 or 6 seconds, saying the event has been created successfully. When I check the list of events, there is no new event. I wait to see whether the process is still running and check back after a few minutes; still no event is created.
I suspect that the spawn exits before the script can run to completion. I thought that maybe the stdio streams were still open, so I tried using spawned.on("close") instead of spawned.on("exit"), but the program still exits before it runs all the way through. I don't see any exceptions or errors in the Node/Express console, so I can't figure out why the program exits successfully without running to completion.
I've used the same inputs when running from the terminal and from Postman, and have logged them to check for empty variables being sent, but found nothing wrong with them either. I've double-checked the paths as well, literally copy-pasting from pwd to make sure I haven't missed anything, but still nothing.
What am I doing wrong here?
So here's the problem I found and solved:
The folder that the Node Express app was being served from and the folder where the bash scripts were saved were different directories.
Problem:
So basically, whenever I created a child process, it was created with the following current directory:
var/www/html/node/
But the bash scripts were run from:
var/www/html/other/bash/scripts/
so any commands in the bash script that involved a directory change (like cd) were relative to the bash directory.
However, since the spawn's current directory was var/www/html/node, the script executed in the spawn also had the node folder as its current working directory, and any directory changes within the script were now invalid since those paths didn't exist relative to the node directory.
E.g.
When run from terminal:
test.sh -> cd /savedir/ -> /var/www/html/other/bash/scripts/savedir/ -> exists
When run from spawn:
test.sh -> cd /savedir/ -> /var/www/html/node/savedir/ -> Doesn't exist!
Solution:
The easiest way I was able to solve this was to modify the test.sh file: at the start I added cd /var/www/html/other/bash/scripts/. This changes the working directory of my spawned script to the right directory, which makes all the mv, cd and other path-relative commands in the script valid.
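Another option (a sketch, not part of the original fix) is to leave the script untouched and set the child's working directory through the cwd option that child_process.spawn accepts; the directory below is the scripts directory assumed in this answer:

const { spawn } = require("child_process");

// directory where the bash scripts live (taken from the example above)
const scriptDir = "/var/www/html/other/bash/scripts";

const spawned = spawn("sh", [scriptDir + "/test.sh"], {
  cwd: scriptDir, // relative cd/mv inside the script now resolve against this path
});

spawned.on("exit", (code) => {
  console.log("test.sh exited with code", code);
});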

Create and communicate with a non-terminating shell via child_process in nodejs

Is there a way to spawn a single child_process in NodeJS and pass it various commands over time keeping the same process open as long as necessary? Sort of like a spawned terminal which accepts commands from Node.
Why? Performance.
I have a NodeJS/Electron application which should execute PowerShell commands, and this is achieved using Node's child_process module. However, the performance is not great: there appears to be a couple of seconds of overhead each time I spawn a child process (which is to be expected, I suppose).
This means that commands such as Get-Date take 600 ms instead of a few (2) milliseconds. Other commands take 2+ seconds instead of, say, 800 ms.
Desired workflow:
Start a child powershell process (exec with shell = powershell)
Pass it a command
Get the results (stdout/stderr)
Wait seconds to minutes for the user...
Pass it a second command
Get the results (stdout/stderr)
etc...
Close child process
I have considered writing the PowerShell commands from NodeJS to a file commands.txt. Then I would start a single PowerShell child_process which watches/tails the file for new commands and executes them, passing the output into another file which the parent (NodeJS) process watches. This seems a bit hacky, however...
I have found one solution using spawn and periodically piping input to the process with stdin.write:
const { spawn } = require("child_process");
const ps1 = spawn("C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe", [], {});
console.log("PID", ps1.pid, "started");
ps1.stdout.on('data', (data) => {
  console.log("STDOUT:" + data);
});
ps1.stderr.on('data', (data) => {
  console.log("STDERR:" + data);
});
ps1.on('close', (code, signal) => {
  console.log(`child process terminated due to receipt of signal ${signal}`);
});
setInterval(() => {
  ps1.stdin.write("Get-Date\n");
}, 1000);
Results:
PID 7688 started
STDOUT:Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
STDOUT:PS W:\powershell\powowshell\bak>
STDOUT:Get-Date
STDOUT:
STDOUT:Freitag, 17. Mai 2019 17:55:52
STDOUT:
STDOUT:PS W:\powershell\powowshell\bak>
STDOUT:Get-Date
STDOUT:
STDOUT:Freitag, 17. Mai 2019 17:55:53
So now it's "just" a case of stripping whitespace and other fuzz out of the output and extracting the results; one way to delimit each command's output is sketched below.
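Here is that sketch (the __END__ marker and the runCommand helper are made-up names, not an existing API): echo a sentinel after each command and buffer stdout until the sentinel appears.

// Send one command to the long-lived PowerShell process and resolve with
// whatever arrived on stdout before the sentinel marker appears.
function runCommand(ps, command) {
  return new Promise((resolve) => {
    let buffer = "";
    const onData = (data) => {
      buffer += data.toString();
      const idx = buffer.indexOf("__END__");
      if (idx !== -1) {
        ps.stdout.removeListener("data", onData);
        resolve(buffer.slice(0, idx)); // still contains prompt noise and the command echo
      }
    };
    ps.stdout.on("data", onData);
    // the marker is built from two pieces so the echoed command line
    // does not already contain the literal __END__ string
    ps.stdin.write(command + "; Write-Output ('__EN' + 'D__')\n");
  });
}

// usage with the ps1 process from above:
// runCommand(ps1, "Get-Date").then((out) => console.log(out.trim()));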

Can I capture the output of another process that I started?

I'm currently using > /dev/null & to have Perl script A run Perl script B totally independently, and it works fine. Script B runs without throwing back any output and stays alive when Script A ends, even when my terminal session ends.
Not saying I need it, but is there a way to recapture its output if I wanted to?
Thanks
Your code framework may look like this:
#!/usr/bin/perl
# I'm a.pl
# ...
system "b.pl > ~/b.out &";
while (1)
{
    my $time = localtime;
    my ($fsize, $mtime) = (stat "/var/log/syslog")[7,9];
    print "syslog: size=$fsize, mtime=$mtime at $time\n";
    sleep 60;
}
while b.pl may look like this:
#!/usr/bin/perl
# I'm b.pl
while (1)
{
    my $time = localtime;
    my $fsize_a = (stat "/var/log/auth.log")[7];
    my $fsize_s = (stat "/var/log/syslog")[7];
    print "fsize: syslog=$fsize_s auth.log=$fsize_a at $time\n";
    sleep 60;
}
a.pl and b.pl do their jobs independently.
b.pl is called by a.pl as a background job, which sends its output to b.out (so it won't mess up the screen of a.pl).
You can read b.out from some other terminal, or after a.pl has finished (or while a.pl is temporarily put in the background).
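If a.pl itself later wants to recapture that output, a minimal sketch (the ~/b.out path matches the redirect above) is simply to read the file back:

#!/usr/bin/perl
# read back whatever b.pl has written to its output file so far
use strict;
use warnings;

my $out = "$ENV{HOME}/b.out";
open my $fh, '<', $out or die "cannot open $out: $!";
print while <$fh>;    # dump b.pl's captured output
close $fh;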
About terminating the two scripts:
`ctrl-c` for a.pl
`killall b.pl` for b.pl
Note:
b.pl will never terminate, even when you close your terminal (assuming your terminal runs as a desktop application), so you don't need the `nohup` command to help. (It would perhaps only be useful on a console.)
If your b.pl may spit out error messages from time to time, then you still need to deal with its stderr. It's left as your homework.

Child processes in bash

We use bash scripts with asynchronous calls using '&'. Something like this:
function test() {
    sleep 1
}

test &
mypid=$!
# do some stuff for two hours
wait $mypid
Usually everything is OK, but sometimes we get the error
"wait: pid 419090 is not a child of this shell"
I know that bash keeps child pids in a special table, and I know (man wait) that bash is allowed not to store status information in this table if nobody uses $! and nobody calls 'wait $mypid'. I suspect that this optimization contains a bug that causes the error. Does anybody know how to print this table or how to disable this optimization?
I was trying something quite similar recently.
Are you sure that the second process you run concurrently starts before the previous one dies? In that case, I think there is a possibility that it gets the same pid as the recently died one.
Also, I think we cannot be sure that $! holds the pid of the process we last ran, because there may be several background processes, from other functions, starting or ending at the same time.
I would suggest using something like this:
mypid=$(ps -ef | grep name_of_your_process | awk ' !/grep/ {print $2} ')
In grep name_of_your_process you can specify some parameters too, so as to get the exact process you want.
I hope it helps a bit.
Having written something similar, I suggest the correct strategy is to fork into the background BOTH the test function and the long-running two-hour work.
Then you can wait on the list of pids of the jobs invoked in the background, sorted by their expected running times (fastest first).
The bash(1) wait builtin also allows you to simply wait for all of the child processes to complete, but that may require a checking protocol for successful completion.
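A minimal sketch of that strategy (the long_work function below just stands in for the two hours of other stuff):

function test() {
    sleep 1
}

function long_work() {
    sleep 7200    # stand-in for the two hours of other work
}

test &
test_pid=$!
long_work &
work_pid=$!

# wait on each backgrounded pid, fastest first
wait "$test_pid"
wait "$work_pid"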
An alternative approach, for greater reliability, is to use batch queues, with a separate process started to check for successful completions.
You can use gdb to attach to a running shell and see what's happening. On my system I ran yum install bash-debuginfo. I ran gdb and attached to a running shell.
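For reference, attaching might look like this (the pid is whichever shell you want to inspect):

$ gdb -p $(pgrep -n bash)    # attach to the most recently started bash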
(gdb) b wait_for_single_pid
Breakpoint 1 at 0x441840: file jobs.c, line 2115.
(gdb) c
Continuing.
Breakpoint 1, wait_for_single_pid (pid=11298) at jobs.c:2115
2115 {
(gdb) n
2120 BLOCK_CHILD (set, oset);
(gdb)
2121 child = find_pipeline (pid, 0, (int *)NULL);
(gdb) s
find_pipeline (pid=pid#entry=11298, alive_only=alive_only#entry=0, jobp=jobp#entry=0x0) at jobs.c:1308
1308 {
(gdb)
1313 if (jobp)
(gdb) n
1315 if (the_pipeline)
(gdb)
1329 job = find_job (pid, alive_only, &p);
(gdb) s
find_job (pid=11298, alive_only=0, procp=procp#entry=0x7ffdc053f038) at jobs.c:1364
1364 for (i = 0; i < js.j_jobslots; i++)
(gdb) n
1372 if (jobs[i])
(gdb)
1374 p = jobs[i]->pipe;
(gdb)
1378 if (p->pid == pid && ((alive_only == 0 && PRECYCLED(p) == 0) || PALIVE(p)))
(gdb)
1385 p = p->next;
(gdb)
1387 while (p != jobs[i]->pipe);
The code here is traversing the pipe linked lists attached to the jobs array. I didn't encounter any bugs, but perhaps you can spot them with this approach.
