Perl script does not output STDOUT to file when run from cron - linux

Fairly new to Perl.
I have a Perl script on a Linux machine which has its own logfile. The logfile name can change, depending on the data the script is working on (date, filename, data type, etc.).
At some points the script calls a native executable via system(), which writes some information to STDOUT and STDERR - a few tens to a few hundreds of lines over many minutes. After the executable is done, the script continues and logs some other info to its logfile.
Until now the script only logged its own output, not the native executable's output, which I want to end up in the same files the Perl script logs to. I tried the following two methods:
#!/usr/bin/perl
# ... some other code ...
@array_executable_and_parameters = qw/echo foo/;

# Attempt 1: point the STDOUT/STDERR globs at a lexical filehandle
open $log_fh, '>>', 'log/logfile1.txt';
*STDOUT = $log_fh;
*STDERR = $log_fh;
print "log_fh=$log_fh\n";
system(@array_executable_and_parameters);

# Attempt 2: point the globs at a bareword filehandle
$logfilename = 'log/logfile2.txt';
open(LOGFILEHANDLE, ">>$logfilename");
*STDOUT = LOGFILEHANDLE;
*STDERR = LOGFILEHANDLE;
print LOGFILEHANDLE "Somethinglogged\n";
system(@array_executable_and_parameters);
It works when I run the script manually, but not when run from cron.
I know it is possible to redirect in the crontab entry itself, but then I would have to know the log filename in advance, and it is only determined once some data arrives, so that does not seem feasible. I would also like to keep as much as possible inside the script, without many dependencies on the surrounding Linux setup. I also have no possibility to install any extra Perl modules or libraries; assume a bare-minimum install.
How do I get STDOUT and STDERR redirected to a specific file from inside the Perl script?
And if possible, how do I detect which filename STDOUT currently goes to?

Reassigning *STDOUT only rebinds Perl's own STDOUT handle; it does not change the underlying file descriptor 1, which is what a child process started with system() inherits. The proper way to redirect standard output at the system level is something like
open (STDOUT, '>&', $log_fh) or die "$0: could not: $!";
You should similarly report errors from your other calls which could fail (and use strict, etc.).
cron runs your job in your home directory, so if the directory $HOME/log does not exist, opening the log file will fail - silently, because you are not checking for open errors!
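Putting it together, a minimal sketch, assuming the log directory already exists and using a hypothetical absolute path (your real script would build the name from the incoming data):
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical absolute path; cron starts jobs in $HOME, so a relative
# path like 'log/logfile1.txt' may not point where you expect.
my $logfilename = '/home/youruser/log/logfile1.txt';

open my $log_fh, '>>', $logfilename
    or die "$0: cannot open $logfilename: $!";

# Duplicate the log handle onto the real file descriptors 1 and 2.
# Children started with system() inherit those descriptors, so their
# output lands in the same logfile.
open STDOUT, '>&', $log_fh or die "$0: cannot redirect STDOUT: $!";
open STDERR, '>&', $log_fh or die "$0: cannot redirect STDERR: $!";
$| = 1;    # flush the parent's output promptly

my @array_executable_and_parameters = qw/echo foo/;
system(@array_executable_and_parameters) == 0
    or warn "$0: child exited with status $?";

# On Linux you can also ask the kernel which file STDOUT currently
# points to, by reading the /proc symlink for file descriptor 1:
my $stdout_target = readlink("/proc/$$/fd/1");
print "STDOUT now goes to $stdout_target\n";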

Related

How to count number of times a file was executed on linux

I have an executable file and I would like to know how many times it is being executed. The file is located on a network file system. Is there a way to do this with a script using one of the Linux utilities? The limitation I have is that I would like to avoid changing the file itself. For example, I will not add a counter file that gets updated by a wrapper script, and I will not make the executable call some API to increment a counter in, e.g., a database.
I don't know exactly how to watch a file for execution, but you can construct something with inotify that counts how many times the file is opened. You could use a script like this (inotifywait comes from the inotify-tools package):
#! /bin/bash
EXEC_CNT=0
FILE_TO_WATCH=/path/to/your/file
while inotifywait -e open "$FILE_TO_WATCH"
do
((EXEC_CNT++))
echo "$FILE_TO_WATCH opened $EXEC_CNT times"
# Or to store in a file:
# echo "$FILE_TO_WATCH opened $EXEC_CNT times" >> "$FILE_TO_WATCH.log"
done
In the case of a network share, this script must be run on the computer that shares the file system, since inotify only sees local events.

Linux All Output to a File

Is there any way to tell the Linux system to put all output (stdout, stderr) to a file?
Without using redirection or pipes, and without modifying how the scripts get called.
Just tell Linux to use a file for output.
for example:
script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like "./test1.sh" (without redirection or a pipe),
I'd like to see "Testing 123" in a file (/tmp/linux_output).
Problem: in the system, a binary calls a script and this script calls many other scripts. It is not possible to modify each call, so if I can make Linux put the output into a file, I can review the logs.
#!/bin/bash
exec >file 2>&1
echo "Testing 123 "
You can read more about exec here
If you are running the program from a terminal, you can use the command script.
It opens up a sub-shell; do what you need to do, and it will copy everything written to the terminal into a file. When you are done, exit the shell with ^D or exit.
This does not use redirection or pipes.
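For example, a quick sketch using the file name from the question:
script /tmp/linux_output     # start the capturing sub-shell
./test1.sh                   # run whatever you need; output still shows on screen
exit                         # or ^D: stop capturing
cat /tmp/linux_output        # contains "Testing 123" and everything else shown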
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer - depending on your terminal window and the options in its menus, there may be an option in there to capture terminal I/O to a file.
Your requirement, taken literally, is impractical, because it rests on a slight misunderstanding. Fundamentally, to get the output to go to a file, you have to change something to direct it there - which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output directions configured in a parent process will be inherited. So you only have to setup the redirection once, using either a shell, or a custom launcher program or intermediary. After that it will be inherited.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Run the first script redirecting both stdout and stderr (bash syntax shown here - you can do this in many ways, such as writing a C program that redirects its outputs and then exec()'s your real program):
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
Of course, if you really dislike this, you can always modify the kernel - but again, that is changing something (and a very ungainly solution at that).

How to check when a file has been changed in linux?

I have a Linux command line program.
It produces output to a file.
The output file is modified continuously by the program after short time intervals.
Every time the program changes the file, I want to be notified.
Is there any command-line tool for that, or any script which could help me?
I think incrond is what you need.
incrond (the inotify cron daemon) is a daemon which monitors filesystem events (such as adding a new file, deleting a file, and so on) and executes commands or shell scripts. Its use is generally similar to cron.
Take a look here for some examples http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
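For illustration only, an incrontab entry (edited with incrontab -e) has the form "path event-mask command". The watched path and notify-change.sh below are placeholders for your own file and handler script; $@ expands to the watched path:
/path/to/output.txt IN_MODIFY /usr/local/bin/notify-change.sh $@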
I think you need one of:
Linux inotify, or
the File Alteration Monitor (FAM), or
incron, or
the Linux audit subsystem.
Also please look here.
A script using the inotify tools might look as follows:
while true; do
change=$(inotifywait -e close_write,moved_to,create .)
change=${change#./ * }
if [ "$change" = "myfile" ]
then
echo -e "my file changed"
fi
done

How do I pipe the output of an LS on remote server to the local filesystem via SFTP?

I'm logged into a remote server via SFTP at the command line. The folder I'm in contains hundreds of thousands of files. I need to get a list of these files in a text file so I can access them programmatically, as none of the PHP SFTP clients are able to return such a large list of files.
When I run an ls on the directory (within the SFTP session), it takes about 20 minutes for the file list to finally display.
I don't have write access on this server, so I can't pipe the output to a file on the remote server.
How can I pipe the output to a text file on my local machine ... or get a list of the files to my local machine some other way?
If you're willing to wait the 20 minutes for the data to scroll across your screen you can capture all the output using "script".
Call 'script' before you start your ssh or sftp session and it will capture all terminal output to your local disk. Type 'exit' to finish the capture.
NAME
script -- make typescript of terminal session
SYNOPSIS
script [-akq] [-t time] [file [command ...]]
DESCRIPTION
The script utility makes a typescript of everything printed on your
terminal. It is useful for students who need a hardcopy record of an
interactive session as proof of an assignment, as the typescript file
can be printed out later with lpr(1).
If the argument file is given, script saves all dialogue in file. If no
file name is given, the typescript is saved in the file typescript.
If the argument command is given, script will run the specified command
with an optional argument vector instead of an interactive shell.
The following options are available:
-a Append the output to file or typescript, retaining the prior contents.
-k Log keys sent to program as well as output.
-q Run in quiet mode, omit the start and stop status messages.
-t time
Specify time interval between flushing script output file. A
value of 0 causes script to flush for every character I/O event.
The default interval is 30 seconds.
The script ends when the forked shell (or command) exits (a control-D to
exit the Bourne shell (sh(1)), and exit, logout or control-D (if
ignoreeof is not set) for the C-shell, csh(1)).
Certain interactive commands, such as vi(1), create garbage in the
typescript file. The script utility works best with commands that do
not manipulate the screen. The results are meant to emulate a hardcopy
terminal, not an addressable one.
ENVIRONMENT
The following environment variable is utilized by script:
SHELL If the variable SHELL exists, the shell forked by script will be
that shell. If SHELL is not set, the Bourne shell is assumed.
(Most shells set this variable automatically).
SEE ALSO
csh(1) (for the history mechanism).
HISTORY
The script command appeared in 3.0BSD.
BUGS
The script utility places everything in the log file, including linefeeds
and backspaces. This is not what the naive user expects.
It is not possible to specify a command without also naming the script
file because of argument parsing compatibility issues.
When running in -k mode, echo cancelling is far from ideal. The slave
terminal mode is checked for ECHO mode to check when to avoid manual echo
logging. This does not work when in a raw mode where the program being
run is doing manual echo.
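The session might look roughly like this (host and file names are placeholders):
script filelist.txt            # begin capturing the terminal session
sftp user@remote.example.com
ls                             # the long listing scrolls by and is captured
exit                           # leave the sftp session
exit                           # or ^D: stop capturing; filelist.txt now holds the listing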
Wu's answer is good if you do it remotely. Here is another option if you are logged onto the remote server and want to send the file back home to yourself:
A proper answer is here: http://scratching.psybermonkey.net/2011/02/ssh-how-to-pipe-output-from-local-to.html
your_command | ssh username@server "cat > filename.txt"
If you have ssh access, that would be very easy:
ssh user@server ls > foo.txt
Otherwise, you can just redirect sftp's STDOUT and STDERR to a file. You have to type password and commands blindly though.
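As a sketch (the prompts are not visible, so you type blindly):
sftp user@server > filelist.txt 2>&1
# type the password, then "ls", then "exit" - all without seeing prompts;
# the listing ends up in filelist.txt on the local machine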
In my case the following worked:
ssh user@server ls /path/to/source/folder/ > /path/to/destination/folder/filenames.txt
I ran it in Git Bash. It will first ssh to the server, then list all files of the source folder, and save the file names to the destination text file on the local machine.
You can give the output file any extension you like, for example .json instead of .txt.
To append to the output file instead of overwriting it, use ">>" instead of ">".

how to resolve the TCL script error when put it in crontab: "error writing "stdout": bad file number"?

I have a Tcl script with a function that writes an error log, but I get the error below when I put the script in crontab:
error writing "stdout": bad file number
while executing
"puts $msg"
the code pieces are:
if { $logLevel >= 0 } {
puts $msg
flush stdout
}
The script runs successfully when started manually; the error only occurs when it is run from crontab.
thanks,
Emre
When you run a program from cron, it runs with an unusual environment. In particular, there is no terminal, the environment variables are different, neither stdin nor stdout is normally available, and stderr is redirected so that it gets emailed to you if anything fails. As we can see from the error message in your case, stdout is not open (technically, it only says it's not open for writing, but even so); puts defaults to writing there if not told otherwise.
The basic fix? Don't write to stdout! Open a file somewhere else and write to that. Alternatively, define a redirection of stdout in your crontab entry so that it goes somewhere definite (and is thus available for writing to from inside your Tcl program).
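For the second option, a crontab entry along these lines (the schedule, interpreter path, and file names are placeholders) gives the job a writable stdout and stderr again:
# crontab -e
*/10 * * * * /usr/bin/tclsh /path/to/script.tcl >> /home/emre/script.log 2>&1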
