NodeJS child spawn exits without even waiting for process to finish

I'm trying to create an Angular 11 application that connects to a NodeJS API which runs bash scripts when called; on exit, it should either send an error or a 200 status with a confirmation message.
Here is one of the functions from that API. It runs a script called initialize_event.sh, feeds it a few inputs when prompted, and once the script finishes its course should send a success message (there is no error block for this function):
exports.create_event = function (req, res) {
  var child = require("child_process").spawn;
  var spawned = child("sh", ["/home/ubuntu/master/initialize_event.sh"]);
  spawned.stdout.once("data", function (data) {
    spawned.stdin.write(req.body.name + "\n");
  });
  spawned.stdout.once("data", function (data) {
    spawned.stdin.write(req.body.domain_name + "\n");
  });
  spawned.on("exit", function (err) {
    res.status(200).send(JSON.stringify("Event created successfully"));
  });
};
The bash script is a long one, but what it basically does is take two variables (event name and domain name) and uses that to create a new event instance. Here are the first few lines of code for the program:
#!/bin/bash
#GET EVENT NAME
echo -n "Enter event name: "; read event;
echo -n "Enter event domain: "; read eventdomain;
#LOAD VARIABLES
export eventdomain;
export event;
export ename=$event-env;
export event_rds=someurl.com;
export master_rds=otherurl.com;
export master_db=master;
# rest of code...
When called directly from the terminal, the process takes around 30-40 seconds after taking input to create an event and then exits once completed. I can then check the list of events using another script, and the new event shows up in the list. However, when I call this script from the NodeJS function, it takes the inputs but exits within 5 or 6 seconds, saying the event has been created successfully. When I check the list of events, there is no new event. I wait to see if the process is still running and check back after a few minutes; still no event created.
I suspect that the spawn exits before the script can run to completion. I thought that maybe the stdio streams were still open, so I tried listening for spawned.on('close') instead of spawned.on('exit'), but the program still exits before the script runs all the way through. I don't see any exceptions or errors in the Node express console, so I can't figure out why the process exits successfully without running completely.
I've used the same inputs when running from the terminal and on Postman, and have logged them as well to see if there are any empty variables being sent, but found nothing wrong with them either. I've double-checked the paths as well, literally copy-pasted from pwd to make sure I haven't been missing something, but still nothing.
What am I doing wrong here?

So here's the problem I found and solved:
The folder the Node Express app was served from and the folder where the bash scripts were saved were two different directories.
Problem:
So basically, whenever I created a child process, it was created with the following current directory:
var/www/html/node/
But the bash scripts were run from:
var/www/html/other/bash/scripts/
so any commands in the bash script that involved a directory change (like cd) were written relative to the bash scripts' directory.
However, since the spawn's current directory was var/www/html/node, the script executed by the spawn also had the node folder as its working directory, and any directory changes within the script were now invalid, since those paths don't exist relative to the node directory.
E.g.
When run from terminal:
test.sh -> cd savedir/ -> /var/www/html/other/bash/scripts/savedir/ -> exists
When run from spawn:
test.sh -> cd savedir/ -> /var/www/html/node/savedir/ -> Doesn't exist!
Solution:
The easiest way I was able to solve this was to modify the test.sh file: at the start I added cd /var/www/html/other/bash/scripts/. This changes the current directory of the spawn to the right directory, making all the mv, cd, and other path-dependent commands valid.
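Alternatively, the working directory can be set from the Node side with spawn's cwd option, so the script itself needs no hard-coded cd. A minimal sketch, assuming the scripts live in the directory from the example above:
var child = require("child_process").spawn;
// cwd makes relative paths inside the script resolve against the scripts folder
var spawned = child("sh", ["/home/ubuntu/master/initialize_event.sh"], {
  cwd: "/var/www/html/other/bash/scripts",
});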

Related

Create and communicate with a non-terminating shell via child_process in nodejs

Is there a way to spawn a single child_process in NodeJS and pass it various commands over time keeping the same process open as long as necessary? Sort of like a spawned terminal which accepts commands from Node.
Why? Performance.
I have a NodeJS/Electron application which should execute powershell commands and this is achieved using Node's child_process module. However the performance is not great: there appears to be a couple of seconds overhead each time I spawn a child process (which is to be expected I suppose).
This means that commands such as Get-Date take 600ms instead of a few (2) milliseconds. Other commands take 2+ seconds instead of say 800ms.
Desired workflow:
Start a child powershell process (exec with shell = powershell)
Pass it a command
Get the results (stdout/stderr)
Wait seconds to minutes for the user...
Pass it a second command
Get the results (stdout/stderr)
etc...
Close child process
I have considered writing powershell commands from NodeJS to a file commands.txt. Next I would start a single powershell child_process which watches/tails a file for new commands and executes them, passing the output into another file which the parent (NodeJS) process watches. This seems a bit hacky however...
I have found one solution using spawn and periodically piping input to the process with stdin.write:
const { spawn } = require("child_process");
const ps1 = spawn("C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe", [], {});
console.log("PID", ps1.pid, "started");
ps1.stdout.on('data', (data) => {
  console.log("STDOUT:" + data);
});
ps1.stderr.on('data', (data) => {
  console.log("STDERR:" + data);
});
ps1.on('close', (code, signal) => {
  console.log(`child process terminated due to receipt of signal ${signal}`);
});
setInterval(() => {
  ps1.stdin.write("Get-Date\n");
}, 1000);
Results:
PID 7688 started
STDOUT:Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
STDOUT:PS W:\powershell\powowshell\bak>
STDOUT:Get-Date
STDOUT:
STDOUT:Freitag, 17. Mai 2019 17:55:52
STDOUT:
STDOUT:PS W:\powershell\powowshell\bak>
STDOUT:Get-Date
STDOUT:
STDOUT:Freitag, 17. Mai 2019 17:55:53
So now it's "just" a case of stripping whitespace and other fuzz and getting the results.
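One way to do that (a rough, untested sketch built on the ps1 child above) is to append an arbitrary end marker after each command and buffer stdout until it appears:
const DONE = "___DONE___"; // arbitrary sentinel, not part of any PowerShell API
function run(command) {
  return new Promise((resolve) => {
    let buffer = "";
    const onData = (data) => {
      buffer += data.toString();
      const end = buffer.indexOf(DONE);
      if (end !== -1) {
        ps1.stdout.removeListener("data", onData);
        resolve(buffer.slice(0, end).trim());
      }
    };
    ps1.stdout.on("data", onData);
    // Build the marker from two pieces so the echoed command line
    // never contains the full sentinel text itself.
    ps1.stdin.write(command + '; echo ("___DONE" + "___")\n');
  });
}
// Usage: run("Get-Date").then((out) => console.log(out));
The resolved string still contains the echoed prompt and command line, so the stripping mentioned above is still needed.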

How to use set -x without showing stdout?

Within CI, I am running a bash script that calls many bash scripts.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
This does not disable the stdout returned by the script.
The GitLab CI runners stop logging once the limit is reached; the job says "Job's log exceeded limit of 10240000 bytes".
I know the log can only grow.
How can I optimize the output log size?
I don't need all of the stdout. I could keep only stderr, but then it would be a long-running script that prints no information.
Is there a way to display only the commands being run, as set -x does?
Edit
Reading the answers, I was not able to solve my issue. I need to add that I am using nodejs to run the bash script that runs the long bash script.
This is how I call my node script within .gitlab-ci.yml:
script:
- node my_script.js
Within my_script.js, I have:
const { spawn } = require('child_process');
const path = require('path');

exports.handler = () => {
  const ls = spawn('bash', [path.join(__dirname, 'release.sh')], { stdio: 'inherit' });
  ls.on('close', (code) => {
    if (code !== 0) {
      console.log(`ps process exited with code ${code}`);
      process.exitCode = code;
    }
  });
};
Within release.sh, I have:
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
You can selectively redirect file handles with exec.
exec >stdout 2>stderr
This however loses the connection to the terminal, so there is no simple way to output anything to the terminal after this point.
You can instead duplicate a file handle with m>&n where m is the number of the file descriptor to duplicate and n is the number of the new one (choose a big number like 99 to not accidentally clobber an existing handle).
exec 98>&1 # stdout
exec 99>&2 # stderr
exec >/dev/null 2>&1
# ... commands whose output is suppressed ...
To re-enable output,
exec 1>&98 2>&99
If you redirected to a temporary file instead of /dev/null you could obviously now show the tail of those files to the caller.
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
(On a shared server, probably use mktemp to create a unique temporary directory at the beginning of your script; static hard-coded file names make it impossible to run two builds at the same time.)
As you usually can't predict where the next error will happen, probably put all of this in a wrapper script which performs the redirection, runs the build, and finally displays the tail end of the temporary log files. Some build servers probably want to see some signs of life in the log file every few minutes, so perhaps tail a few lines every once in a while in a loop, too.
On the other hand, if there is just a single build command, the whole build job's stdout and stderr can simply be redirected to a log file, and you don't need to exec things back and forth. If you need to enable output selectively for portions of the script, use exec as above; but for wholesale redirection, just redirect the one command.
In summary, maybe your build script would look something like this.
#!/bin/sh
t=$(mktemp -t -d cibuild.XXXXXXXX) || exit
trap 'kill $buildpid; wait $buildpid; tail -n 500 "$t"/*; rm -rf "$t"' 0 1 2 3 5 15
# Your original commands here
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" >"$t"/stdout 2>"$t"/stderr &
buildpid=$!
while kill -0 $buildpid; do
    sleep 180
    date
    tail -n 1 "$t"/*
done
wait
A flaw with this approach is that you lose timing information. A proper solution would let you see when each line was produced, and display standard output and standard error intermixed in the order the messages were printed, perhaps with visible time stamps, and even with coloring hints (red time stamps for stderr?).
Option 1
If your script writes its error messages to stderr, you can discard all output to stdout with command > /dev/null, where /dev/null is a black hole that swallows anything written to it.
Option 2
If there is a pattern to your error messages, you can use grep to filter for those messages.
Edit 1:
To show the command that is running, you can supply the -x option to bash; your command then becomes
bash -x ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
Bash will print each command it executes to stderr.
Edit 2:
If you want to reduce the size of the output file, you can pipe it through gzip: ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" | gzip > logfile.
To read the content of the logfile, you can use zcat logfile.
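From the Node side, another option (a sketch, not from the original answers) is to stop inheriting the child's stdout at all while keeping stderr, by giving spawn a per-stream stdio array; combined with bash -x, only the executed commands reach the CI log:
const { spawn } = require('child_process');
const path = require('path');
// 'inherit' stdin and stderr, 'ignore' discards the child's stdout entirely
const ls = spawn('bash', ['-x', path.join(__dirname, 'release.sh')], {
  stdio: ['inherit', 'ignore', 'inherit'],
});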

How to correctly execute the cd command from inside of Node.js?

I'm developing a very simple Electron app for Windows which, when executed from the command prompt, opens a dialog box through which the user can select a folder. The app then changes the command prompt's directory to the directory selected by the user.
My end goal is to be able to simply type dirnav, select a folder from the dialog box, and have the app take care of redirecting the command prompt to the selected directory (instead of typing cd C:\Users\myName\whateverDirectory). Here's what I have so far:
const exec = require('child_process').exec;
const electron = require('electron');
const { app, dialog } = electron;

app.on('ready', () => {
  dialog.showOpenDialog(
    {
      title: 'Select a directory',
      defaultPath: '.',
      buttonLabel: 'Select',
      properties: ['openDirectory']
    }, (response) => {
      exec('cd ' + response[0], () => {
        app.quit();
      });
    }
  );
});
Unfortunately, simply doing exec('cd ' + response[0]) doesn't work: instead of changing the directory of the command prompt the application was run from, it changes the directory of another (unknown to me) command prompt. Is there any way to work around that?
Here's a simple scheme that will work from a batch file:
for /f %%i in ('node yourapp.js') do set NEWDIR=%%i
cd %NEWDIR%
And, my yourapp.js is this (just to prove that the concept works):
process.stdout.write("subdir");
This will end up executing in the batch file:
cd subdir
You should be able to plug in your electron showOpenDialog() in your own app and then just write the result to process.stdout.
The for loop in the batch file does indeed look odd, but it's the only way I found to get the stdout of an app into an environment variable that you can then use later in the batch file. You could, of course, also redirect the output to a temp file, but I thought an environment variable was a cleaner solution.
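Putting the two together, yourapp.js might look roughly like this (a sketch built from the question's own dialog code; only the selected path is written to stdout so the for /f loop can capture it):
const electron = require('electron');
const { app, dialog } = electron;

app.on('ready', () => {
  dialog.showOpenDialog({
    title: 'Select a directory',
    properties: ['openDirectory']
  }, (response) => {
    // Write only the chosen path; any extra output would confuse the batch file
    if (response) process.stdout.write(response[0]);
    app.quit();
  });
});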

Execute multiple commands in Third party CLI mode using Node JS

I want to execute 5 commands in sequence and log their output. For example, the first command, XXXcli ip_address, connects me to the third-party CLI mode; the next commands execute a script, log output, and so on. My problem: when I SSH through node.js and spawn a shell inside the SSH session, executing the first command shows no output on my console. The session creates a shell, and once the shell enters the third-party CLI it becomes impossible for me to fire the next command or log the output of the first one. Kindly help me on this; I've been stuck for a long time.
Update:
My Code:
session.on('exec', function (accept, reject, info) {
  console.log('Client wants to execute: ' + inspect(info.command));
  var stream = accept();
  var cp = spawn('XXXCLI 10.21.254.12', { shell: true });
  stream.stdin.pipe(cp.stdin);
  cp.stdout.pipe(stream.stdout);
  sleep(6000); // sleep() is presumably a custom blocking helper; it is not a Node built-in
  cp.stderr.pipe(stream.stderr);
  cp.on('exit', function (code, signal) {
    stream.exit(signal || code);
  }).on('end', function (code, signal) {
    stream.close();
  });
});
When I manually type the first command, XXXCLI ip_address, in my command prompt and press Enter, I get the output "Connected to CLI....". Once the connection succeeds, I need to execute my second command, i.e. "Lmc sample", which loads the master config and outputs "Message sent..". The third command executes a script and also outputs "Message sent..". This is what happens when I enter these commands manually in the command prompt.
Once I execute the first command, XXXCLI 10.21.254.12, manually in cmd, the path where we actually execute commands, i.e. C:\users\CLI>, is no longer visible, because the session is now connected to the above-mentioned IP (10.21.254.12). Only after connecting to this IP can I execute my other commands, i.e. the command to load the master config, the command to execute the script, etc.
So I want to execute my first command, keep its stream, and execute the rest of the commands inside the stream created by the first command.
Thanks!
I fixed this using child_process in Node.js and writing the commands directly to the child's stdin stream. When I did the same in Java it didn't work, but it did in Node.js.
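A minimal sketch of that approach (XXXCLI and the command names are placeholders from the question, and the delays are arbitrary):
const { spawn } = require('child_process');

const cli = spawn('XXXCLI 10.21.254.12', { shell: true });
cli.stdout.on('data', (data) => console.log('OUT: ' + data));
cli.stderr.on('data', (data) => console.log('ERR: ' + data));

// Queue follow-up commands on the same process's stdin, waiting
// between writes instead of blocking the event loop with sleep()
setTimeout(() => cli.stdin.write('Lmc sample\n'), 6000);
setTimeout(() => cli.stdin.write('run_script\n'), 12000); // hypothetical script command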

How to restart a group of processes when it is triggered from one of them in C code

I have a few processes, *.rt, written in C.
I want to restart all of them (*.rt) from within the process foo.rt (one of the *.rt) itself, in built-in C code.
Normally I have 2 bash scripts, stop.sh and start.sh, which are invoked from the shell.
Here is what the scripts do:
stop.sh --> sends kill -9 to all ".rt" processes.
start.sh --> invokes the processes named ".rt"
My problem is how to restart all the rt's from C code. Is there any idea how to restart all "*.rt" processes, triggered from the foo.rt file?
I tried the following in foo.rt but it doesn't work, because stop.sh kills all .rt processes, even the forked child that is supposed to execute the start.sh script:
...
case 708: /* There is a trigger signal here */
{
    result = APP_RES_PRG_OK;
    if (fork() == 0) { /* child */
        execl("/bin/sh", "sh", "-c", "/sbin/stop.sh", NULL);
        execl("/bin/sh", "sh", "-c", "/sbin/start.sh", NULL); /* Error: this will be killed by the /sbin/stop command */
    }
}
I solved the problem with the "at" daemon in Linux.
I invoke 2 system() calls, stop & start.
My first attempt was faulty as explained above: execl replaces the process image and never returns to the later execl unless it fails.
Here is my solution:
case 708: /* There is a trigger signal here */
{
    system("echo '/sbin/start.sh' | at now + 2 min");
    system("echo '/sbin/stop.sh' | at now + 1 min");
}
You could use process groups, at least if all your related processes are originated by the same process...
So you could write a glue program in C which sets up a new process group using setpgrp(2) and stores its pid (or keeps running, waiting for some IPC).
Then you would stop that process group by using killpg(2).
See also the notion of a session, and setsid(2).
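A minimal sketch of that glue idea in C (assuming start.sh launches all the *.rt processes, so they inherit the group):
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {            /* child: become leader of a new process group */
        setpgid(0, 0);         /* pgid == child's pid; the *.rt children inherit it */
        execl("/bin/sh", "sh", "-c", "/sbin/start.sh", (char *)NULL);
        _exit(127);            /* exec failed */
    }
    setpgid(pid, pid);         /* set it from the parent too, to avoid a race */
    /* ... later, to stop every process in the group at once: */
    killpg(pid, SIGTERM);
    return 0;
}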
