I'd like to run a program as root that can intercept other programs' stdout and stderr.
For example, say I start a nodejs server and it hits an error (with logs printed to stderr); if my program is running, I would like it to intercept that error.
Is that possible? How should I do it?
Also, an idea that came to mind was to replace the nodejs binary with another one that starts nodejs and redirects stderr to a custom file, but I think that's too messy and I hope there are better ways to do it.
If you can control how nodejs is called, you can redirect stderr to a named pipe and then read the named pipe from another command, like this:
mkfifo /tmp/nodejs.stderr
nodejs 2>/tmp/nodejs.stderr
Then, in some other shell, type:
grep "Error Pattern" </tmp/nodejs.stderr
If you can't control how nodejs is called, then you can create a shell script to wrap those commands and call the shell script wherever nodejs is called.
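For example, a minimal wrapper sketch could look like the following (the real binary's location /usr/bin/nodejs and the pipe path are assumptions; adjust them to your system):
#!/bin/sh
# Wrapper that stands in for nodejs: create the pipe if it is missing,
# then exec the real binary with its stderr sent to the pipe.
[ -p /tmp/nodejs.stderr ] || mkfifo /tmp/nodejs.stderr
exec /usr/bin/nodejs "$@" 2>/tmp/nodejs.stderr
Keep in mind that writes to a named pipe block until a reader has it open, so the grep (or whatever reads the pipe) should already be running.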
I have a server that runs an express app and a react app. I needed to start both the apps on boot.
So I added two lines to rc.local, but it seems like only the first line runs and the second one doesn't. Why is that, and how can I solve it?
Just as in any other script, the second command will only be executed after the first one has finished. That's probably not what you want when the first command is supposed to keep running pretty much forever.
If you want the second command to execute before the first has finished, and if you want the script to exit before the second command has finished, then you must arrange for the commands to run in the background.
So, at a minimum, instead of
my-first-command
my-second-command
you want:
my-first-command &
my-second-command &
However, it's better to do something a little more complex: in addition to putting the command into the background, also place the command's working directory at the root of the filesystem, disconnect the command's input from the console, deliver its standard output and standard error streams to the syslog service (which will typically append that data to /var/log/syslog), and protect it from unintended signals. Like:
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
and similarly for the second command.
The extra redirections at the end of the line are there to keep nohup from emitting unwanted informational messages and creating an unused nohup.out file. You might want to leave the final 2>&1 out until you are sure that the rest of the command is correct and behaving the way you want. When you get to the point where the only message shown is "nohup: redirecting stderr to stdout", you can restore the 2>&1 to get rid of that message.
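Putting it together, a minimal rc.local along those lines might look like this sketch (my-first-command and my-second-command are placeholders; on many systems rc.local already provides the shebang line and the trailing exit 0, so keep whatever is already there):
#!/bin/sh -e
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
( cd / && nohup sh -c 'my-second-command 2>&1 | logger -t my-second-command &' </dev/null >/dev/null 2>&1 )
exit 0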
Fairly new to Perl.
I have a Perl script on a Linux machine which has its own logfile. The logfile name can change, depending on the data the script is working on (date, filename, datatype, etc.).
At some points the script calls a native executable via system(), which writes some information to STDOUT and STDERR: a few tens to a few hundred lines over many minutes. After the executable is done, the script continues and logs some other info to the logfile.
Until now the script has only logged its own output, without the native executable's output, which I want to log to the same files the Perl script logs to. I tried it with the following two methods:
#!/usr/bin/perl
#some other code
@array_executable_and_parameters = qw/echo foo/ ;
open $log_fh, '>>', 'log/logfile1.txt';
*STDOUT = $log_fh;
*STDERR = $log_fh;
print "log_fh=$log_fh\n";
system( @array_executable_and_parameters);
$logfilename='log/logfile2.txt';
open(LOGFILEHANDLE, ">>$logfilename" );
*STDOUT = LOGFILEHANDLE;
*STDERR = LOGFILEHANDLE;
print LOGFILEHANDLE "Somethinglogged\n";
system( @array_executable_and_parameters);
It works when I run the script manually, but not when run from cron.
I know it is possible to redirect in the crontab by Linux means, but then I would have to know the filename to log to, which will only be known when some data arrives, so that does not seem feasible to me. I would also like to keep as much as possible inside the script, without many dependencies on Linux itself. I also have no possibility of installing any extra Perl modules or libraries; assume it is a bare-minimum install.
How do I get STDOUT and STDERR redirected to a specific file from inside the Perl script?
And, if possible, how do I detect which filename STDOUT currently goes to?
Reassigning *STDOUT only rebinds Perl's internal idea of the STDOUT file handle; it does not change file descriptor 1 at the system level, which is what the child started by system() inherits. The proper way to redirect standard output on the system level is something like:
open (STDOUT, '>&', $log_fh) or die "$0: could not: $!";
You should similarly report errors from your other system calls which could fail (and use strict, etc.).
cron runs your job in your home directory, so if the path $HOME/log does not exist, the script will fail to open the log file handle (silently, because you are not logging open errors!).
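If you want a stop-gap until the in-script redirection is in place, one hedged sketch is to change into the script's directory in the crontab entry itself, so the relative log/ path resolves (the path and schedule below are placeholders, not taken from the question):
*/5 * * * * cd /path/to/script && ./script.pl
That still leaves the per-run logfile naming to the script, which is what the open(STDOUT, '>&', ...) approach above handles.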
How can I redirect everything that is displayed in the console to a file?
I mean, for example: I call some function and it displays something on the console (no matter whether it uses console.log or process.stdout.write).
Thanks for the help!
While not strictly a Node.js answer, you could achieve this at the shell level. For example, if you are using bash, you could redirect both the standard output and standard error streams to a file using the following:
#!/bin/bash
node app.js &> output.log
Also check out the tee command for simultaneous output to both a file and the screen.
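For example, assuming bash so that 2>&1 merges stderr into the piped stdout, something like this keeps a copy in output.log while still printing to the screen:
node app.js 2>&1 | tee output.log
Add -a (tee -a output.log) if you want to append to the file instead of overwriting it.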
I'm running a simple apache web server on Linux (Ubuntu 14.04) with a perl CGI script handling some requests. The script initiates a system command using the system function, but I want it to return immediately, regardless of the outcome of the system call.
I've been adding an ampersand to the end of the scalar argument passed to system (I am aware of the implications of command injection attacks) and although this does cause the system command to return immediately, the script will still not exit until the underlying command has completed.
If I trigger a dummy ruby script with a 10 second sleep using the system call from the perl CGI, then my request to the web server still waits 10 seconds before finally getting a response. I put a log statement after the system call and it appears immediately when the web request is made, so the system call is definitely returning immediately, but the script is still waiting at the end.
This question is similar, but neither of the solutions has worked for me.
Here's some example code:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init(
{ level => $DEBUG, file => ">>/var/log/script.log" } );
print "Content-type: application/json\n\n";
my $cgi = CGI->new();
INFO("Executing command...");
system('sudo -u on-behalf-of-user /tmp/test.rb one two &');
INFO("Command initiated - will return now...");
print '{"error":false}';
Edit:
The command is executed with sudo -u because the Apache user www-data needs permission to execute the script on behalf of the script owner, and I've updated my sudoers file appropriately to that end. This is not the cause of my issue, because I've also tried changing the script ownership to www-data and running system("/tmp/test.rb one two &"), but the result is the same.
Edit 2:
I've also tried adding exit 0 at the very end of the script, but it doesn't make any difference. Is it possible that the script is exiting immediately, but the Apache server is holding onto the response until the script the Perl CGI called has finished? Or is it possible that some setting or configuration of the operating system is causing the problem?
Edit 3:
Running the Perl CGI script directly from a terminal works correctly: the Perl script ends immediately, so this is not an inherent issue with Perl. Presumably that can only mean that the Apache web server is holding onto the request until the command called from system() has finished. Why?
The web server creates a pipe from which to receive the response. It waits for the pipe to reach EOF before completing the request, and a pipe reaches EOF only when all copies of the writer handle are closed.
The writer end of the pipe is set as the child's STDOUT. That file handle was copied to become the shell's STDOUT, and again to become mycmd's STDOUT. So even though the CGI script and the shell ended, and thus closed their ends of the file handle, mycmd still holds the handle open, so the web server is still waiting for the response to complete.
All you have to do is close the last handle to the writer end of the pipe. Or, more precisely, you can avoid creating it in the first place by attaching a different handle to mycmd's STDOUT:
mycmd arg1 arg2 </dev/null >/dev/null 2>&1 &
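Applied to the command from the question, the string passed to system() would become something like this sketch (same redirections as above; point stdout/stderr at a log file instead of /dev/null if you want to keep the output):
sudo -u on-behalf-of-user /tmp/test.rb one two </dev/null >/dev/null 2>&1 &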
I have a script written in Node.js; it uses the 'net' library and communicates with a distant service over TCP. The script is started with the command 'node script.js >> log.txt', and everything in the script that is logged with console.log() gets written to log.txt, but sometimes the script dies and I cannot find a reason: nothing gets logged to log.txt around the time the script crashed.
How can I capture crash reason?
Couldn't you listen to the uncaughtException event? Something along the lines of:
process.on('uncaughtException', function (err) {
  console.log('Caught exception: ' + err);
});
P.S.: after that you are advised to restart the process, according to this article from Felix Geisendörfer.
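If no process manager is available, a crude restart can be arranged at the shell level. A minimal sketch, reusing the log.txt redirection from the question (the err.txt name and the sleep interval are assumptions):
while true; do
    node script.js >> log.txt 2>> err.txt
    echo "script.js exited with status $?, restarting" >> err.txt
    sleep 1
done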
It's much easier to capture exceptions by splitting stdout and stderr. Like so:
node script.js 1> log.out 2> err.out
By default, node logs normal output to stdout, which I believe you are capturing with >>.
As noted in the comments, the "Segmentation fault" message is printed to stderr by the shell, not necessarily by your program. See this unix.stackexchange answer for other options.
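If you suspect the process is being killed by a signal, checking the exit status right after it ends can also help: a status above 128 means death by signal, e.g. 139 = 128 + 11 (SIGSEGV). A minimal sketch:
node script.js 1> log.out 2> err.out
status=$?
echo "node exited with status $status"    # 139 usually points to a segfault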