I know this has been asked countless times but I am looking for a solution that uses crond's native log function. I do not want to pipe the output of each cron and prepend the timestamp.
I am launching crond like this:
crond -L /var/log/cron.log -f
the logs are like this:
crond: crond (busybox 1.30.1) started, log level 8
crond: USER root pid 16 cmd echo "hello"
crond: USER root pid 18 cmd echo "hello"
crond: USER root pid 19 cmd echo "hello"
I'd like to prepend the timestamp to each line. I do not want to add some stdout command to each individual cron job to prepend the date.
Maybe I could watch the file and prefix each new line as it arrives? How do I get access to crond's stream and modify it?
I believe the answer is that it's not possible to modify crond's output file.
The implementation details of crond do not make it easy to control the log file for individual jobs. Also, crond runs as root, which makes it hard for user jobs to change the file, and modifying the file while crond is running will likely cause problems.
Consider instead the following option:
Write a process that will tail -f the log file and create a new log file with each line prefixed by the timestamp.
Run the process at boot time.
tail -f /var/log/cron.log | while read x ; do echo "$(date) $x" ; done >> /var/log/cron-ts.log
Or adjust the date format to whatever you need.
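If you want this to survive reboots, a small wrapper launched from your init script could look like the sketch below (the script name and paths are illustrative; tail -F, where supported, keeps following the file across log rotation):
#!/bin/sh
# cron-ts.sh - hypothetical wrapper: prepend a timestamp to each line crond logs
tail -F /var/log/cron.log | while read -r line ; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
done >> /var/log/cron-ts.log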
Related
I've written a program that is supposed to run for a long time, and it outputs its progress to stdout. However, under some circumstances it begins to hang, and the easiest thing to do is to restart it.
My question is: Is there a way to do something that would kill the process only if it had no output for a specific number of seconds?
I have started thinking about it, and the only thing that comes to mind is something like this:
./application > output.log &
tail -f output.log
then create a script which would look at the date and time of the last modification of output.log and restart the whole thing.
But it looks very tedious, and I would hate to go through all that if there is an existing command that does it.
As far as I know, there isn't a standard utility to do it, but a good start for a one-liner would be:
timeout=10; if [ -z "`find output.log -newermt @$[$(date +%s)-${timeout}]`" ]; then killall -TERM application; fi
At least, this will avoid the tedious part of coding a more complex script.
Some hints:
Using the find utility to compare the last modification date of the output.log file against a time reference.
The time reference is returned by date utility as the current time in seconds (+%s) since EPOCH (1970-01-01 UTC).
Using the bash $[] arithmetic expansion (an older spelling of $(( ))) to subtract the $timeout value (10 seconds in the example).
If no output is returned by the above find, then the file hasn't changed for more than 10 seconds. This makes the if condition true, and the killall command is executed.
You can also set an alias for that, using:
alias kill_application='timeout=10; if [ -z "`find output.log -newermt @$[$(date +%s)-${timeout}]`" ]; then killall -TERM application; fi';
And then use it whenever you want by just issuing the command kill_application.
If you want to automatically restart the application without human intervention, you can install a crontab entry that runs every minute or so and issues the application restart command after the killall. (You may also want to change -TERM to -KILL, in case the application becomes unresponsive to catchable signals.)
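A sketch of such a watchdog, suitable for a one-minute crontab entry (the paths, process name, and 50-second threshold are illustrative assumptions; -newermt requires GNU find):
#!/bin/sh
# watchdog.sh - hypothetical: kill and restart the application if its log went quiet
timeout=50
logfile=/path/to/output.log
if [ -z "$(find "$logfile" -newermt "@$(($(date +%s) - timeout))")" ]; then
    killall -KILL application 2>/dev/null
    /path/to/application > "$logfile" 2>&1 &
fi
with a crontab entry such as:
* * * * * /path/to/watchdog.sh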
inotifywait could help here; it efficiently waits for changes to files. Its exit status can be checked to tell whether the event (modify) occurred within the specified interval of time.
$ inotifywait -e modify -t 10 output.log
Setting up watches.
Watches established.
$ echo $?
2
Some related info from man:
OPTIONS
-e <event>, --event <event>
Listen for specific event(s) only.
-t <seconds>, --timeout <seconds>
Exit if an appropriate event has not occurred within <seconds> seconds.
EXIT STATUS
2 The -t option was used and an event did not occur in the specified interval of time.
EVENTS
modify A watched file or a file within a watched directory was written to.
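Building on that exit status, a restart loop could look like this sketch (the application path is an assumption; status 2 means no modify event arrived before the timeout):
#!/bin/sh
# hypothetical supervisor: restart the application whenever its output goes quiet
/path/to/application > output.log 2>&1 &
app_pid=$!
while true ; do
    inotifywait -qq -e modify -t 10 output.log
    if [ $? -eq 2 ]; then
        # no write within 10 seconds: assume a hang and restart
        kill -TERM "$app_pid"
        /path/to/application > output.log 2>&1 &
        app_pid=$!
    fi
done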
I would like to run a specific script at a certain time (only once!). If I run it normally like this:
marc@Marc-Linux:~/tennis_betting_strategy1/wrappers$ Rscript write_csv2.R
It does work. However, I would like to schedule it as a cron job to run at 10:50, and therefore did the following:
50 10 11 05 * Rscript ~/csv_file/write_csv.R
This does not seem to work, however. Any thoughts on where I go wrong? These are the details of the cron package I'm using:
PID COMMAND
1015 cron
My system time also checks out:
marc@Marc-Linux:~/tennis_betting_strategy1/wrappers$ date
wo mei 11 10:56:46 CEST 2016
There is a special tool for running commands only once: at.
With at you can schedule a command like this:
at 09:05 am today
at> enter your commands...
Note that you'll need the atd daemon running.
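You can also feed at non-interactively by piping the commands to it on stdin; a sketch using the script from the question (Rscript's full path is an assumption, see the PATH discussion below):
echo "/usr/bin/Rscript $HOME/csv_file/write_csv.R" | at 10:50
at reads the commands from standard input and runs them once at the given time.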
Your crontab entry looks okay, however. I'd suggest checking that the cron daemon is running (the exact daemon name depends on the cron package; it could be cron, crond, or vixie-cron, for instance). One way to check whether the daemon is running is to use the ps command, e.g.:
$ ps -C cron -o pid,args
PID COMMAND
306 /usr/sbin/cron
Some advice.
Read more about the PATH variable. Notice that it is set differently in interactive shells (see your ~/.bashrc) and in cron or at jobs. See also this about Rscript.
Replace your command with a shell script, e.g. ~/bin/myrscriptjob.sh.
That myrscriptjob.sh file should start with #!/bin/sh
Be sure to make that shell script executable:
chmod u+x ~/bin/myrscriptjob.sh
Add some logging in your shell script, near the start; either use logger(1) or at least some date(1) command suitably redirected, or even both:
#!/bin/sh
# file myrscriptjob.sh
/bin/date +"myrscriptjob starting %c %n" > /tmp/myrscriptjob.start
/usr/bin/logger -t myrscriptjob "starting $$"
/usr/local/bin/Rscript $HOME/csv_file/write_csv.R
in the last line above, replace /usr/local/bin/Rscript by the output of which Rscript done in some interactive terminal.
Notice that you should not use ~ in shell scripts; replace it with $HOME where appropriate.
Finally, use at to run your script once. If you want to run it periodically in a crontab job, give the absolute path, e.g.
5 09 11 05 * $HOME/bin/myrscriptjob.sh
and check in /tmp/myrscriptjob.start and in your system log if it has started successfully.
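For example, to inspect the logger output (file names vary by distribution; journalctl assumes systemd):
grep myrscriptjob /var/log/syslog
journalctl -t myrscriptjob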
BTW, in your myrscriptjob.sh script, you might replace the first line #!/bin/sh with #!/bin/sh -vx (then the shell is verbose about execution, and cron or at will send you some email). See dash(1), bash(1), execve(2).
Use the full path (starting from /) for both Rscript and write_csv2.R. Check a sample command as follows:
/tmp/myscript/Rscript /tmp/myfile/write_csv2.R
Ensure you have execute permission for Rscript and write permission in the folder where the CSV file will be created (/tmp/myfile).
Hello, I have the following problem. I have init scripts, and I must run syslogd (busybox), so I have code like this:
...
"$__start_program" $OPTIONS
....
If I
echo "$__start_program $OPTIONS"
it prints
/sbin/syslogd -s 512 -l 6 -L -O "/var/log/a.log"
I can see the process in ps, but syslogd doesn't actually start (there are no messages in the log file about starting, and logger doesn't write anything to the log at all). But if I run the script manually from the command line (with the same arguments), it works fine. Can someone help me with this problem?
Don't use a string to store commands; that's not what strings are for. The link provided in the comments contains some good discussion of the potential problems this may cause.
It's not clear from the question where one string starts and the other ends, but you should use a function to achieve what you are trying to do. Something like this:
log_daemon() {
    param_s="$1"
    logfile="$2"
    /sbin/syslogd -s "$param_s" -l 6 -L -O "$logfile"
}
Then call it from your script like:
log_daemon 512 /var/log/a.log
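If your init scripts actually run under bash rather than plain sh, an array is another safe way to store a command's arguments; a sketch (not POSIX sh compatible, so check your shebang first):
# bash only: an array keeps each argument separate, unlike a flat string
OPTIONS=(-s 512 -l 6 -L -O /var/log/a.log)
/sbin/syslogd "${OPTIONS[@]}"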
In a shell script I have a command like top -p PID, followed by some more commands. As soon as top -p PID runs, I have to press Ctrl+C to exit from it; only then do the further commands execute. I want to do this periodically, I have everything I want in a shell script, and I want to put it into crontab. The only thing that bothers me is: if I schedule this script in crontab, after top -p PID starts, how will I supply the Ctrl+C to allow the further commands to execute? Please help.
My script is like this, a very simple one:
top -p "$1"
free -m
netstat -antp | grep 3306 | grep "$1"
jmap -dump:file=my_stack$RANDOM.bin "$1"
You can send signals with kill. In your case, however, you can just restrict top to one (or a few) iterations:
top -p $1 -n 1
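If the script runs without a terminal (e.g. from cron), top's batch mode is usually needed as well; a sketch:
top -b -n 1 -p "$1"
The -b flag makes top print plain text to stdout for one iteration instead of trying to drive a terminal.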
Update:
You can redirect the output of a command to a file. Either overwrite the file each time
command.sh >file.txt 2>&1
or append to a file
command.sh >>file.txt 2>&1
If you don't want the error output, leave out the 2>&1 part.
top -p PID &             # run it in the background instead
some_pid=$!              # $! holds the PID of the background job
kill -s INT "$some_pid"  # send the equivalent of Ctrl+C when done
I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group@mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes run?
Try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms; it is part of the expect package, so you may need to find and install that first.
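If unbuffer is not available, stdbuf from GNU coreutils can often achieve a similar effect by forcing line-buffered output; a sketch (it only affects programs that use standard stdio buffering):
stdbuf -oL -eL topscript.sh >/var/log/topscript.log 2>&1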
I hope this helps.