Why does part of the script not execute under crontab - linux

I have a script that stops the application and zips some files:
/home/myname/project/stopWithZip.sh
With the properties below:
-rwxrwxr-x. 1 myname myname 778 Jun 25 13:48 stopWithZip.sh
Here is the content of the script:
ps -ef | grep project | grep -v grep | awk '{print $2}' |xargs kill -15
month=`date +%m`
year=`date +%Y`
fixLogs=~/project/log/fix/$year$month/*.log.*
errorLogs=~/project/log/error/$year$month/log.*
for log in $fixLogs
do
    if [ ! -f "$log.gz" ]; then
        gzip $log
        echo "Archived:"$log
    else
        echo "skipping" $log
    fi
done
echo "Archived fix log files done"
for log in $errorLogs
do
    if [ ! -f "$log.gz" ]; then
        gzip $log
        echo "Archived:"$log
    else
        echo "skipping" $log
    fi
done
echo "Archived errorlog files done"
The problem is that, apart from the ps -ef | grep project | grep -v grep | awk '{print $2}' | xargs kill -15 command, nothing else runs: the gzip commands are never executed, and I don't understand why.
No compressed logs ever appear in the directories.
By the way, when I execute stopWithZip.sh directly on the command line, it works perfectly fine.
In crontab:
00 05 * * 2-6 /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1 (does NOT work)
On the command line:
/home/myname/project>./stopWithZip.sh (works)
Please help

The script fails when run under cron because cron invokes it with 'project' in its path, so the kill pipeline matches the script's own command line and kills the script itself; that is why the gzip commands never run.
You could prove (or disprove) this by adding some tracing. Log the output of ps and of awk to log files:
ps -ef |
tee /tmp/ps.log.$$ |
grep project |
grep -v grep |
awk '{print $2}' |
tee /tmp/awk.log.$$ |
xargs kill -15
Review the logs and see that your script is one of the processes being killed.
The crontab entry contains:
/home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
When ps lists that, it contains 'project' and does not contain 'grep' so the kill in the script kills the script itself.
When you run it from the command line (using a conventional '$' as the prompt), you run:
$ ./stopWithZip.sh
and when ps lists that, it does not contain 'project' so it is not killed.
If you ran:
$ /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
from the command line, the way cron runs it, you would find that it fails there too.
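One possible way to make the pipeline safe under cron (a sketch of my own, not part of the original answer, and assuming the application's processes do not themselves contain "stopWithZip.sh" in their command lines) is to exclude the script's own name and PID from the match; xargs -r is the GNU option that skips kill entirely when nothing matches:
# Sketch: filter out this script (and the cron shell that launched it) by name,
# drop our own PID ($$) as a safety net, and only run kill if PIDs remain.
ps -ef | grep project | grep -v grep | grep -v stopWithZip.sh |
    awk -v self=$$ '$2 != self {print $2}' | xargs -r kill -15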

Related

Using ssh inside a script to run another script that itself calls ssh

I'm trying to write a script that builds a list of nodes, then sshes into the first node of that list and runs a checknodes.sh script, which itself is just a for-loop that calls checknode.sh.
The first two lines seem to work OK and the list builds successfully, but then I either get just the echo line of checknodes.sh printed out, or an error saying cat: gpcnodes.txt: No such file or directory.
MYSCRIPT.sh:
#gets the master node for the job
MASTERNODE=`qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'#' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' | head -n 1`
#builds list of nodes in job
ssh -qt $MASTERNODE "qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'#' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' > /users/issues/slow_job_starts/gpcnodes.txt"
ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/
ssh -qt $MASTERNODE /users/issues/slow_job_starts/checknodes.sh
checknodes.sh
for i in `cat gpcnodes.txt `
do
    echo "### $i ###"
    ssh -qt $i /users/issues/slow_job_starts/checknode.sh
done
checknode.sh
str=`hostname`
cd /tmp
time perf record qhost >/dev/null 2>&1 | sed -e 's/^/${str}/'
perf report --pretty=raw | grep % | head -20 | grep -c kernel.kallsyms | sed -e "s/^/`hostname`:/"
When ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/ is finished, the changed directory is lost.
With the backquotes replaced by $(..) (not an error here, but get used to it), the script would be something like
for i in $(cat /users/issues/slow_job_starts/gpcnodes.txt)
do
    echo "### $i ###"
    ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done
or better
while read -r i; do
    echo "### $i ###"
    ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done < /users/issues/slow_job_starts/gpcnodes.txt
Perhaps you would also like to change your last script (start it with cd /users/issues/slow_job_starts).
You will find more problems, like sed -e 's/^/${str}/' (the ${str} inside single quotes won't be expanded to the hostname), but this should get you started.
EDIT:
I added the -n option to the ssh calls.
It redirects stdin from /dev/null (in effect, it prevents ssh from reading stdin).
Without this option, ssh consumes the rest of the input (the remaining node list in the while-read version), so only one node is checked.
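To illustrate the ${str} quoting problem mentioned above (my own sketch, not part of the original answer): double quotes let ${str} expand to the hostname, while single quotes keep it literal:
str=$(hostname)
# Double quotes: ${str} is expanded, so each line is prefixed with the hostname.
echo "some output" | sed -e "s/^/${str}: /"
# Single quotes: ${str} stays literal, so lines are prefixed with the text "${str}:".
echo "some output" | sed -e 's/^/${str}: /'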

My bash script won't execute commands after kill command

I am trying to make a bash script that kills a process and then does some other things.
PID=`ps -ef | grep logstash | grep -v "grep" | awk '{print $2}'`
echo $PID
kill -9 $PID
echo "logstash process is stopped"
rm /home/user/test.csv
echo "test.csv is deleted."
rm /home/example.txt
echo "example.txt is deleted."
When I run the script, it kills logstash as expected, but it also terminates my whole script.
I've also tried: kill -9 $(ps aux | grep 'logstash' | awk '{print $2}').
With this command, my script gets terminated as well.
It looks like your script name includes "logstash".
As a consequence, PID is filled with two values, and the kill command kills your script as well.
Renaming your script so that "logstash" is not part of its name should fix the issue.
This should correct your issue:
# Match "logstash" only when it is surrounded by spaces, so a command line that
# merely contains the word inside a path or file name (such as this script's own
# name) is not picked up.
PID=$( ps -ef | grep -E '[ ]logstash[ ]' | grep -v "grep" | head -1 | awk '{print $2}')
echo $PID
kill -9 $PID
echo "logstash process is stopped"
rm /home/user/test.csv
echo "test.csv is deleted."
rm /home/example.txt
echo "example.txt is deleted."
Regards!
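As an alternative sketch (my own suggestion, not taken from the answers above): pkill finds and signals matching processes in one step and never matches its own process, although with -f it matches full command lines, so the calling script's file name still must not contain "logstash":
# Sketch: signal every process whose command line matches "logstash".
# -f matches the full command line; keep "logstash" out of this script's
# own file name and path, or the script will kill itself again.
pkill -9 -f logstash
echo "logstash process is stopped"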

Multiple PIDs being stored in PID file

I have a System V init script I've developed that starts a Java program. For some reason whenever the PID file gets created, it contains multiple PIDs instead of one.
Here's the relevant code that starts the service and writes to the PID file:
daemon --pidfile=$pidfile "$JAVA_CMD &" >> $logfile 2>&1
RETVAL=$?
usleep 500000
if [ $RETVAL -eq 0 ]; then
    touch "$lock"
    PID=$(ps aux | grep -vE 'grep|runuser|bash' | grep <myservice> | awk '{print $2}')
    echo $PID > $pidfile
When I test the ps aux... command manually, a single line returns. When running as a script, it appears that this call is returning multiple PIDs.
Example contents in the PID file: 16601 16602 16609 16619 16690. 16619 is the actual process ID found when manually running the ps aux... command mentioned above.
Try reversing your greps. The first one (-vE) may run BEFORE the myservice one starts up. Grep for your service FIRST, then filter out the unwanted lines:
PID=$(ps aux | grep <myservice> | grep -vE 'grep|runuser|bash' | awk '{print $2}')
I encountered the same issue, but with a different statement; it was like this:
PID="$(ps -ef|grep command|grep options|grep -v grep|awk '{print $2}')"
in which I used the same grep order as @Marc suggested in the first answer, but did not filter out all the unwanted lines.
So I tried the below one and it worked:
PID="$(ps -ef|grep command|grep options|grep -vE 'grep|runuser|bash'|awk '{print $2}')"

echo $variable in cron not working

I'm having trouble printing the result of the following when it is run by cron. I have a script named /usr/local/bin/test:
#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ARAW=`date +%y%m%d`
NAME=`hostname`
TODAY=`date '+%D %r'`
cd /directory/bar/foo/
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
echo "Resolve2 Backup" > /home/user/result.txt
echo " " >> /home/user/result.txt
echo "$VARR" >> /home/user/result.txt
mail -s "Result $TODAY" email#email.com < /home/user/result.txt
I configured it in /etc/cron.d/test to run every day at 1 AM:
00 1 * * * root /usr/local/bin/test
When I run it manually on the command line:
# /usr/local/bin/test
I get the complete value. But when I let cron do the work, it never displays the part from echo "$VARR" >> /home/user/result.txt.
Any ideas?
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
ls -ltr /path/to/dir will not include the directory in the filename part of the output. Then, you call ls again with this output, and this will look in your current directory, not in /path/to/dir.
In cron, your current directory is likely to be /, and in your manual testing, I bet your current directory is /path/to/dir
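For illustration only (my own sketch, assuming the intent is just to list the newest entry of /directory/bar/foo/ and grep its listing), the dependency on the current directory can be removed by letting ls print bare names and prefixing the directory explicitly:
# Sketch: take the name of the newest entry (ls -rt sorts oldest first),
# then list it by absolute path so cron's working directory does not matter.
newest=$(ls -rt /directory/bar/foo/ | tail -1)
VARR=$(ls -lrt /directory/bar/foo/"$newest" | grep something)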
Here's another approach to finding the newest file in a directory that emits the full path name:
stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-
This requires GNU stat; check your man page for the correct invocation on your system.
I think your VARR invocation can be:
latest_dir=$(stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-)
interesting_files=$(ls -ltr "$latest_dir"/*something*)
Then, no need for a temp file:
{
    echo "Resolve2 Backup"
    echo
    echo "$interesting_files"
} |
mail -s "Result $TODAY" email@email.com
Thanks for all your tips and responses. I solved my problem. The problem was the output of $8 and $9 in cron. I don't know what special field is being read when it runs under cron. I'm just a newbie at scripting, so sorry for my bad script =)

Why does a Linux redirect lose info?

I wrote a script like this:
#!/bin/bash
LOG_PATH=/root/cngiqos-log
LOG_NAME=term.log
TERM_PATH=/home/bnrcqos/qos_M11/term
test -d $LOG_PATH || mkdir -p $LOG_PATH
routeID='M11'
if [ `ps -ef | grep 'term$' | grep -v grep | wc -l` -gt 0 ]; then
    echo $routeID' term process is already running'
else
    cd $TERM_PATH
    (nohup ./term > $LOG_PATH/$LOG_NAME 2>&1 &)
fi
And I input "tail -f /root/cngiqos-log/term.log" and see the log, the log loss info, the log only output part of a log and then don't output any more. But when I input "./term" and run it in fg, the output is fine.
Does any body know why? Is it a system bug?
Maybe you just get what you asked for? tail just gives the last 10 lines by default.
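For example (a small illustration, not part of the original answer), asking tail for more lines shows more of the existing log before it starts following:
# Print the last 100 lines of the log, then keep following it.
tail -n 100 -f /root/cngiqos-log/term.log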
