Cronjob fails when script is located in other folder - linux

I have a bash script that contains a line like this:
readarray users < <(cat /etc/passwd | grep 'aText' | awk -F':' '{print $1}');
When I run it from my home folder, it works. Same if I run it with sudo, and same success when I run it as a cron job added with sudo crontab -e and defined like this: */15 * * * * /home/my-home/myscript.sh > /home/my-home/myscript.log 2>&1
When I move the script to /opt/my-org/my-app/utils/myscript.sh I can still run it directly, but when I update the cron job (again with sudo crontab -e) to */15 * * * * /opt/my-org/my-app/utils/myscript.sh > /opt/my-org/my-app/utils/myscript.log 2>&1 I get the following error in the log file:
/opt/my-org/my-app/utils/myscript.sh: line 6: syntax error near unexpected token `<'
/opt/my-org/my-app/utils/myscript.sh: line 6: `readarray users < <(cat /etc/passwd | grep 'aText' | awk -F':' '{print $1}');'
I am using RHEL 7.9. Why is this happening?
My script has the #!/bin/bash line, and the bash version is 4.3.
Also, I noticed this:
$ readarray users < <(cat /etc/passwd | grep 'aText' | awk -F':' '{print $1}');
$ echo $?
0
$ sudo readarray users < <(cat /etc/passwd | grep 'aText' | awk -F':' '{print $1}');
sudo: readarray: command not found
$ echo $?
1
I expect the cron job to run successfully regardless of where the script is located.
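(The sudo readarray failure above is expected, by the way: readarray is a bash builtin, not an external command, so sudo cannot find it.) Whatever shell is actually interpreting the script under cron, a rewrite that avoids the < <(...) process substitution sidesteps the parse-time syntax error. A minimal sketch, with sample data standing in for /etc/passwd and the aText pattern:

```shell
#!/bin/bash
# Sample lines standing in for /etc/passwd (assumption: the real
# script reads /etc/passwd directly).
passwd_sample='alice:x:1000:1000:aText user:/home/alice:/bin/bash
bob:x:1001:1001:other:/home/bob:/bin/bash'

# One awk call replaces the cat | grep | awk pipeline.
users=$(printf '%s\n' "$passwd_sample" | awk -F: '/aText/ { print $1 }')

# readarray (mapfile is the same builtin) from a here-string
# instead of < <(...).
mapfile -t user_array <<< "$users"
echo "${user_array[0]}"   # alice
```

This still requires bash, but it removes the process substitution that the error message points at.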

Related

Using ssh inside a script to run another script that itself calls ssh

I'm trying to write a script that builds a list of nodes, then ssh into the first node of that list
and runs a checknodes.sh script, which itself is just a for loop that calls checknode.sh.
The first two lines seem to work OK and the list builds successfully, but then I either get just the echo line of checknodes.sh printed out, or an error saying cat: gpcnodes.txt: No such file or directory.
MYSCRIPT.sh:
#gets the master node for the job
MASTERNODE=`qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'#' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' | head -n 1`
#builds list of nodes in job
ssh -qt $MASTERNODE "qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'#' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' > /users/issues/slow_job_starts/gpcnodes.txt"
ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/
ssh -qt $MASTERNODE /users/issues/slow_job_starts/checknodes.sh
checknodes.sh
for i in `cat gpcnodes.txt `
do
echo "### $i ###"
ssh -qt $i /users/issues/slow_job_starts/checknode.sh
done
checknode.sh
str=`hostname`
cd /tmp
time perf record qhost >/dev/null 2>&1 | sed -e 's/^/${str}/'
perf report --pretty=raw | grep % | head -20 | grep -c kernel.kallsyms | sed -e "s/^/`hostname`:/"
When ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/ is finished, the changed directory is lost.
With the backquotes replaced by $(..) (not an error here, but get used to it), the script would be something like
for i in $(cat /users/issues/slow_job_starts/gpcnodes.txt)
do
echo "### $i ###"
ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done
or better
while read -r i; do
echo "### $i ###"
ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done < /users/issues/slow_job_starts/gpcnodes.txt
Perhaps you would also like to change your last script (start with cd /users/issues/slow_job_starts)
You will find more problems, like sed -e 's/^/${str}/' (the ${str} inside single quotes won't be replaced by a host), but this should get you started.
EDIT:
I added the -n option to the ssh call.
It redirects stdin from /dev/null (in effect, it prevents ssh from reading stdin).
Without this option only one node is checked, because ssh consumes the rest of the node list from the loop's stdin.
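For the last script, here is a sketch of checknode.sh with the quoting problem fixed: double quotes let ${str} expand, where the original single quotes printed it literally (perf and qhost are assumed to exist on the nodes, as in the original):

```shell
#!/bin/bash
str=$(hostname)
cd /tmp || exit 1
# Double quotes around the sed script so ${str} is expanded by the shell.
time perf record qhost >/dev/null 2>&1 | sed -e "s/^/${str}: /"
perf report --pretty=raw | grep % | head -20 |
    grep -c kernel.kallsyms | sed -e "s/^/${str}: /"
```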

Need to run ksh script in windows korn shell

I am new to the Korn shell. I am trying to run a ksh script that kills all processes older than 3 days on my server. It works fine for direct input, but when I put it in a for loop in a script I get an error. Can someone please help?
FYI, the Korn shell is installed on a Windows server.
> cat test.ksh
#! /usr/bin/ksh
for i in {ps -eo etime,pid,args | awk -F- '$1>3{print}' | grep -i read_ini | awk '{print $2}'}
do
kill -9 $i
done
LCQU#SETOPLCORA01Q [/dev/fs/E/home/serora]
> ./test.ksh
./test.ksh[3]: syntax error: `|' unexpected
LCQU#SETOPLCORA01Q [/dev/fs/E/home/serora]
> ksh test.ksh
test.ksh[3]: syntax error: `|' unexpected
LCQU#SETOPLCORA01Q [/dev/fs/E/home/serora]
> ls -l test.ksh
-rwxrwx--- 1 jagadee Domain Users 133 Dec 24 13:16 test.ksh
Do not use {} but $() for a subprocess:
for i in $(ps -eo etime,pid,args | awk -F- '$1>3{print}' | grep -i read_ini | awk '{print $2}')
do
kill -9 $i
done
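To see the difference: $(...) substitutes the output of a subprocess into the word list, while {...} in that position is just literal text. A small demo, with printf standing in for the real ps pipeline:

```shell
#!/usr/bin/ksh
# printf stands in for: ps -eo etime,pid,args | awk ... | grep ... | awk ...
for i in $(printf '101\n202\n')
do
    echo "would kill PID $i"
done
```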

echo $variable in cron not working

I'm having trouble printing the result of the following when it is run by cron. I have a script named test under /usr/local/bin:
#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ARAW=`date +%y%m%d`
NAME=`hostname`
TODAY=`date '+%D %r'`
cd /directory/bar/foo/
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
echo "Resolve2 Backup" > /home/user/result.txt
echo " " >> /home/user/result.txt
echo "$VARR" >> /home/user/result.txt
mail -s "Result $TODAY" email@email.com < /home/user/result.txt
I configured it in /etc/cron.d/test to run every 1am:
00 1 * * * root /usr/local/bin/test
When I run it manually on the command line:
# /usr/local/bin/test
I get the complete value. But when I let cron do the work, it never displays the part from echo "$VARR" >> /home/user/result.txt.
Any ideas?
VARR=$(ls -lrt /directory/bar/foo/ | tail -1 | awk {'print $8'} | ls -lrt `xargs` | grep something)
ls -ltr /path/to/dir will not include the directory in the filename part of the output. Then, you call ls again with this output, and this will look in your current directory, not in /path/to/dir.
In cron, your current directory is likely to be /, and in your manual testing, I bet your current directory is /path/to/dir
Here's another approach to finding the newest file in a directory that emits the full path name:
stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-
Requires GNU stat, check your man page for the correct invocation for your system.
I think your VARR invocation can be:
latest_dir=$(stat -c '%Y %n' /path/to/dir/* | sort -nr | head -1 | cut -d" " -f 2-)
interesting_files=$(ls -ltr "$latest_dir"/*something*)
Then, no need for a temp file:
{
echo "Resolve2 Backup"
echo
echo "$interesting_files"
} |
mail -s "Result $TODAY" email@email.com
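The stat pipeline above can be checked end-to-end in a scratch directory (mktemp plus GNU stat and touch assumed):

```shell
#!/bin/bash
dir=$(mktemp -d)
# Give the two files different modification times.
touch -d '2020-01-01' "$dir/old"
touch -d '2021-01-01' "$dir/new"
# Newest entry with its full path: mtime + name, sorted newest first.
newest=$(stat -c '%Y %n' "$dir"/* | sort -nr | head -1 | cut -d" " -f 2-)
echo "$newest"    # prints the full path of "new"
rm -rf "$dir"
```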
Thanks for all your tips and responses. I solved my problem. The problem was the output of $8 and $9 in cron; I don't know what special field is being read when it runs under cron. I'm just a newbie at scripting, so sorry for my bad script =)

Why part of the script cannot execute in the crontab

I have a script that stops the application and zips some files:
/home/myname/project/stopWithZip.sh
With the permissions below:
-rwxrwxr-x. 1 myname myname 778 Jun 25 13:48 stopWithZip.sh
Here is the content of the script:
ps -ef | grep project | grep -v grep | awk '{print $2}' |xargs kill -15
month=`date +%m`
year=`date +%Y`
fixLogs=~/project/log/fix/$year$month/*.log.*
errorLogs=~/project/log/error/$year$month/log.*
for log in $fixLogs
do
if [ ! -f "$log.gz" ];
then
gzip $log
echo "Archived:"$log
else
echo "skipping" $log
fi
done
echo "Archived fix log files done"
for log in $errorLogs
do
if [ ! -f "$log.gz" ]; then
gzip $log
echo "Archived:"$log
else
echo "skipping" $log
fi
done
echo "Archived errorlog files done"
The problem is that except for the ps -ef | grep project | grep -v grep | awk '{print $2}' | xargs kill -15 command, the other gzip commands are not executed, and I totally don't understand why.
I cannot see any compressed logs in the directory.
BTW, when I execute stopWithZip.sh explicitly on the command line, it works perfectly fine.
In crontab:
00 05 * * 2-6 /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1 (NOT work)
In command line:
/home/myname/project>./stopWithZip.sh (work)
Please help
The script fails when run under cron because your script is invoked with project in its path, so the kill pipeline kills the script too.
You could prove (or disprove) this by adding some tracing. Log the output of ps and of awk to log files:
ps -ef |
tee /tmp/ps.log.$$ |
grep project |
grep -v grep |
awk '{print $2}' |
tee /tmp/awk.log.$$ |
xargs kill -15
Review the logs and see that your script is one of the processes being killed.
The crontab entry contains:
/home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
When ps lists that, it contains 'project' and does not contain 'grep' so the kill in the script kills the script itself.
When you run it from the command line (using a conventional '$' as the prompt), you run:
$ ./stopWithZip.sh
and when ps lists that, it does not contain 'project' so it is not killed.
If you ran:
$ /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
from the command line, like you do with cron (crontab), you would find it fails.
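One way to make the script safe under cron, whatever its path contains, is to filter its own PID (and its parent's, since the cron wrapper's command line also contains 'project') out of the kill list. A sketch, assuming GNU xargs for the -r flag, which skips kill entirely when nothing matches:

```shell
#!/bin/bash
# Exclude this script ($$) and its parent ($PPID) so the pipeline
# cannot kill itself when 'project' appears in its own path.
ps -ef | grep project | grep -v grep | awk '{print $2}' |
    grep -v -w -e "$$" -e "$PPID" |
    xargs -r kill -15
```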

working fine in command prompt but not in shell script

I have to get the field count from a particular line in a gzipped file.
When I run the pipeline on the Linux command line it gives me output:
gunzip -c file | grep 'good' | awk -F' ' '{print NF}'
Executed on the command line this prints 10, which is correct.
But when I assign it to a variable in a shell script and execute the .sh, it gives me an error:
cat > find.sh
cnt=`gunzip -c file | grep 'good' | awk -F' ' '{print NF}'`
echo $cnt
./ sh find.sh
find.sh: 2: find sh: 10: not found
Please help out in this..!!
Try this:
cat find.sh
#!/bin/bash
cnt=$(gunzip -c file | awk '/good/ {print NF}')
echo $cnt
./find.sh
10
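The combined awk call can be sanity-checked on sample data, with printf standing in for gunzip -c file:

```shell
#!/bin/bash
# A line matching /good/ with 10 fields, as in the question.
printf 'good f2 f3 f4 f5 f6 f7 f8 f9 f10\nbad x\n' |
    awk '/good/ { print NF }'    # prints 10
```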