Node.js command get error while using pgrep - node.js

In my code, I need to check if a program is started. To do so, I have a function 'running':
function running(app, callback) {
  var arg = 'pgrep --count ' + app;
  exec(arg, function (err, stdout, stderr) {
    if (err) {
      console.log('Error: ' + inspect(err) + ' ' + stderr);
      callback('0');
    } else {
      var data = '' + stdout;
      callback(data.charAt(0)); // Will be '0' only if no app is started
    }
  });
}
It worked well for some time, but now I get:
Error: { [Error: Command failed: ]
  [stack]: [Getter/Setter],
  [arguments]: undefined,
  [type]: undefined,
  [message]: 'Command failed: ',
  killed: false,
  code: 1,
  signal: null }
(stderr is empty)
I don't understand why, and so can't think of any solution.
Could anyone tell me why I get this error?

pgrep returns a non-zero exit status when no processes match your request. Node interprets this non-zero status as meaning that pgrep failed. You can easily check this at the shell by using echo $?, which shows you the exit status of the previous command. Assuming you have some bash instances running:
$ pgrep --count bash; echo $?
You'll see on the console the number of bash instances running, and an exit code of 0. Now, if you try with something that does not exist:
$ pgrep --count nonexistent; echo $?
You'll see a count of 0 and an exit status of 1.
Here is what the man page for pgrep says about the exit status:
EXIT STATUS
0 One or more processes matched the criteria.
1 No processes matched.
2 Syntax error in the command line.
3 Fatal error: out of memory etc.
So you could check the result with something like this:
var count;
if (err) {
  if (err.code === 1) {
    count = 0; // Status 1 means no match, so we don't have to parse anything.
  } else {
    // Real error, fail hard...
  }
} else {
  count = ... ; // parse count from stdout
}
callback(count);

var arg = 'pgrep --count ' + app,
There are two issues here:
depending on your pgrep implementation, the long option --count may not be supported; the portable spelling is -c
that line should end with a ;, not a comma.
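With both fixes applied, the line would read (using omxplayer as an example process name):

```javascript
var app = 'omxplayer'; // example process name
var arg = 'pgrep -c ' + app;
console.log(arg); // prints "pgrep -c omxplayer"
```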

Related

Is it possible to bubble up a nodejs exit code in jenkins powershell and use that exit code to set build status?

I have a nodeJS script that returns non-zero exit codes and I'm running this script with Powershell plugin in Jenkins in a pipeline job. I would like to use these exit codes in the pipeline to set build statuses. I can see the non-zero exit codes e.g. with echo in powershell, but using exit $LastExitCode always exits with code 1.
Here's what I currently have:
def status = powershell(returnStatus: true, script: '''
    try {
        node foo.js
        echo $LastExitCode
        exit $LastExitCode
    } catch {
        exit 0
    }
''')
println status
if (status == 666) {
    currentBuild.result = 'UNSTABLE'
    return
} else if (status == 1) {
    currentBuild.result = 'FAILURE'
    return
} else {
    currentBuild.result = 'SUCCESS'
    return
}
The "foo.js" file there is very simple:
console.log("Hello from a js file!");
process.exit(666);
The code above sets the build status to failure, since println status prints "1". So my question is: is it even possible to bubble the non-zero custom exit codes up into the pipeline code through the Powershell plugin somehow? Or is there some other way to achieve this, totally different from what I'm trying to do here?
UPDATE:
Eventually I scrapped the idea of exit codes for now, and went with an even uglier, hackier way :-(
import org.apache.commons.lang.StringUtils

def filterLogs(String filter_string, int occurrence) {
    def logs = currentBuild.rawBuild.getLog(10000).join('\n')
    int count = StringUtils.countMatches(logs, filter_string)
    if (count > occurrence - 1) {
        currentBuild.result = 'UNSTABLE'
    }
}
And later in the pipeline after the nodeJS script has run:
stage ('Check logs') {
    steps {
        filterLogs('Some specific string I console.log from NodeJS', 1)
    }
}
I found this solution in "Jenkins Text finder Plugin, How can I use this plugin with jenkinsfile?", as an answer to that question.
If that's the only way, I guess I'll have to live with that then.

What does the number after `exit` command in expect scripts mean

I am looking at an expect script and it has the following lines:
# some heading
send -- "some command\n"
expect {
    -re $more {
        send -- " "
        exp_continue
    }
    ">" { }
    default { exit 230 }
}
# some heading
send -- "some command\n"
expect {
    -re $more {
        send -- " "
        exp_continue
    }
    ">" { }
    default { exit 211 }
}
So what do the numbers "230" and "211" after the exit command mean?
The numbers are exit codes. They range from 0 to 255 and are used to communicate success or failure to other programs that might invoke this one (e.g. your shell).
In bash and many other shells, you can check the exit status of the last program using $?. An exit status of 0 indicates success; any non-zero status means failure. You should refer to the program's documentation to see what the different exit codes mean.
See also the Wikipedia entry on exit status.

pgrep --count program is always returning 0, even when there is obviously an instance of the program running

I have an app which restarts itself on error.
Since initializing can take some time, I use another application to play a loading video.
Since initializing depends on the internet connection, the app can restart many times, so I need to start the video only once, after checking whether an instance of the program is already running.
I thought I could do so like this:
var arg = 'pgrep --count omxplayer | echo $?';
exec(arg, function (err, stdout, stderr) {
  var data = '' + stdout[0];
  console.log(data);
  if (data === '0') {
    callback(true);
  } else {
    callback(false);
  }
});
The callback starts omxplayer if the argument is false.
The problem is, when I look at my log, I can see that data is always 0, which makes the app start as many instances of omxplayer as there are restarts.
I have the same problem with pkill -0 omxplayer | echo $?
How can I check if omxplayer is running or not? Or how do I fix my code?
I got it: the error was not in pgrep, but in how I handled stdout.
stdout[0] doesn't seem to do anything, and neither does data[0].
I was supposed to use data.charAt(0), which returns '0' only when no omxplayer is started.
var arg = 'pgrep --count omxplayer | echo $?';
exec(arg, function (err, stdout, stderr) {
  var data = '' + stdout;
  data = data.charAt(0);
  console.log(data);
  if (data === '0') {
    callback(true);
  } else {
    callback(false);
  }
});

Node.js spawning a child process interactively with separate stdout and stderr streams

Consider the following C program (test.c):
#include <stdio.h>

int main() {
    printf("string out 1\n");
    fprintf(stderr, "string err 1\n");
    getchar();
    printf("string out 2\n");
    fprintf(stderr, "string err 2\n");
    fclose(stdout);
}
Which should print a line to stdout, a line to stderr, then wait for user input, then another line to stdout and another line to stderr. Very basic!
When compiled and run on the command line, the output of the program once complete (after input is received for getchar()) is:
$ ./test
string out 1
string err 1
string out 2
string err 2
When trying to spawn this program as a child process using nodejs with the following code:
var TEST_EXEC = './test';
var spawn = require('child_process').spawn;
var test = spawn(TEST_EXEC);

test.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});
test.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});

// Simulate entering data for getchar() after 1 second
setTimeout(function () {
  test.stdin.write('\n');
}, 1000);
The output appears like this:
$ nodejs test.js
stderr: string err 1
stdout: string out 1
string out 2
stderr: string err 2
Very different from the output seen when running ./test in the terminal. This is because the ./test program's stdout isn't attached to a terminal when spawned by nodejs. When run in a terminal, the test.c stdout stream is line-buffered, so the buffer is flushed as soon as a \n is reached; when spawned this way with node, stdout is a pipe and therefore block-buffered, so the buffer isn't flushed until it fills or the program exits. This could be resolved by either flushing stdout after every print, or changing the stdout stream to be unbuffered so it flushes everything immediately.
Assuming that test.c source isn't available or modifiable, neither of the two flushing options mentioned can be implemented.
I then started looking at emulating an interactive shell, there's pty.js (pseudo terminal) which does a good job, for example:
var spawn = require('pty.js').spawn;
var test = spawn(TEST_EXEC);

test.on('data', function (data) {
  console.log('data: ' + data);
});

// Simulate entering data for getchar() after 1 second
setTimeout(function () {
  test.write('\n');
}, 1000);
Which outputs:
$ nodejs test.js
data: string out 1
string err 1
data:
data: string out 2
string err 2
However both stdout and stderr are merged together (as you would see when running the program in a terminal) and I can't think of a way to separate the data from the streams.
So the question..
Is there any way using nodejs to achieve the output as seen when running ./test without modifying the test.c code? Either by terminal emulation or process spawning or any other method?
Cheers!
I tried the answer by user568109 but this does not work, which makes sense since the pipe only copies the data between streams. Hence, it only gets to process.stdout when the buffer is flushed...
The following appears to work:
var TEST_EXEC = './test';
var spawn = require('child_process').spawn;
var test = spawn(TEST_EXEC, [], { stdio: 'inherit' });
//the following is unfortunately not working
//test.stdout.on('data', function (data) {
// console.log('stdout: ' + data);
//});
Note that this effectively shares stdio's with the node process. Not sure if you can live with that.
I was just revisiting this since there is now a 'shell' option available for the spawn command in node since version 5.7.0. Unfortunately there doesn't seem to be an option to spawn an interactive shell (I also tried with shell: '/bin/sh -i' but no joy).
However, I just found this, which suggests using stdbuf to change the buffering options of the program that you want to run. Setting them all to 0 produces unbuffered output on every stream, and the streams are still kept separate.
Here's the updated javascript:
var TEST_EXEC = './test';
var spawn = require('child_process').spawn;
var test = spawn('stdbuf', ['-i0', '-o0', '-e0', TEST_EXEC]);

test.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});
test.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});

// Simulate entering data for getchar() after 1 second
setTimeout(function () {
  test.stdin.write('\n');
}, 1000);
It looks like stdbuf isn't pre-installed on OSX and of course isn't available for Windows, but there may be similar alternatives.
You can do this :
var TEST_EXEC = './test';
var spawn = require('child_process').spawn;
var test = spawn(TEST_EXEC);

process.stdin.pipe(test.stdin); // note: the child's stdin is writable, so it must be the pipe destination
test.stdout.pipe(process.stdout);
test.stderr.pipe(process.stderr);
When you use events on stdout and stderr to print the output with console.log, you will get jumbled output because the handlers run asynchronously. The output of each stream will be ordered independently, but output can still get interleaved between stdout and stderr.

Need to call child_process.exec twice for it to work

In an node.js application I have the following code that is supposed to start a program called timestamp:
var exec = require('child_process').exec, child;

child = exec('./timestamp 207.99.83.228 5000 -p 5500 &', function (error, stdout, stderr) {
  if (error !== null) {
    console.log('exec error: ' + error);
  } else {
    // Code to be executed after the timestamp program has started
    ...
  }
});
However, this will not start the timestamp program unless I precede the call to exec with this:
exec('./timestamp 207.99.83.228 5000 -p 5500 &', null);
If I leave out this line, nothing happens, not even an error message.
So, in order to successfully start one instance of the program, I have to call exec twice. Is this a bug in node.js or the ChildProcess class, or am I missing something here?
