Bash script not running as expected from cron vs. shell - Linux

I have a script I got from this website and modified to suit my needs. Original post: Linux Script to check if process is running & act on the result
When running the script from cron, it always creates a new process. If I run it from the shell it runs normally. Can anyone help me debug the issue?
Script
[root@server2 ~]# cat /root/full-migrate.sh
#!/bin/bash
case "$(pidof perl | wc -w)" in
0) echo "Restarting iu-maildirtoimap: $(date)" >> /var/log/iu-maildirtoimap.txt
/usr/local/bin/iu-maildirtoimap -i currentuser.txt -D imap.gmail.com:993 -d -n7 -I&
;;
1) echo "Everything seems okay: $(date)" >> /var/log/iu-maildirtoimap.txt
;;
*) echo "Removed double iu-maildirtoimap: $(date)" >> /var/log/iu-maildirtoimap.txt
kill -9 $(pidof perl | awk '{print $1}')
;;
esac
crontab job
[root@server2 ~]# crontab -l
*/1 * * * * /bin/bash /root/full-migrate.sh
From the logfile:
Removed double iu-maildirtoimap: Tue Dec 30 02:32:37 GMT 2014
Removed double iu-maildirtoimap: Tue Dec 30 02:32:38 GMT 2014
Removed double iu-maildirtoimap: Tue Dec 30 02:32:39 GMT 2014
Everything seems okay: Tue Dec 30 02:32:39 GMT 2014
Restarting iu-maildirtoimap: Tue Dec 30 02:33:01 GMT 2014
Restarting iu-maildirtoimap: Tue Dec 30 02:34:01 GMT 2014
Restarting iu-maildirtoimap: Tue Dec 30 02:35:01 GMT 2014
The first 4 entries are me manually running "/bin/bash /root/full-migrate.sh"
The last 3 are from the crontab.
Any suggestions on how to debug this issue?
At the time of writing:
[root@server2 ~]# $(pidof perl | wc -w)
bash: 13: command not found
[root@server2 ~]# $(pidof perl | awk '{print $1}')
bash: 26370: command not found

Your test from the command line is not valid, because you are executing the output of the pipeline (a process count or a process id), which gives you a "command not found".
From the command line you will need to test this way:
$ pidof perl | wc -w
without the $()
The issue you are most likely having is that cron cannot find pidof in the path. So you will need to figure out where pidof resides on your system:
$ which pidof
and then put that full path in your script (or set PATH in your crontab) and it should work.
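For example, a minimal sketch assuming pidof turns out to live in /sbin (use whatever path which pidof reports on your system): either give cron a fuller PATH at the top of the crontab,
PATH=/sbin:/usr/sbin:/usr/bin:/bin
*/1 * * * * /bin/bash /root/full-migrate.sh
or replace every pidof call inside full-migrate.sh with /sbin/pidof.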

Related

Linux Script to remove lines that match dates

I have a log file that includes lines that are formatted like the following below. I am trying to create a script in Linux that will remove the lines older than x days from the current date.
Wed Jan 26 10:44:35 2022 : Auth: (72448) Login incorrect (mschap: MS-CHAP2-Response is incorrect): [martin.zeus] (from client CoreNetwork port 0 via TLS tunnel)
Wed Jan 16 10:45:32 2022 : Auth: (72482) Login OK: [george.kye] (from client CoreNetwork port 5 cli CA-93-F0-6C-7E-77)
I think you should take a look at logrotate, or at Kibana & Elasticsearch, to parse and filter the logs.
Nevertheless, I made a simple script that prints only the entries from the day you pass as an argument up to the current date.
E.g. this will print only the logs from the last 5 days: bash filter.sh log.txt 5
#!/usr/bin/env bash
file="${1}"
days="${2:-1}"
epoch_days=$(date -d "now -${days} days" +%s)
OFS=$IFS
IFS=$'\n'
while read -r line; do
    # parse the leading "Wed Jan 26 10:44:35 2022" timestamp into epoch seconds
    epoch_log=$(date --date="$(echo "${line}" | cut -d':' -f1,2,3)" +%s)
    if [ "${epoch_log}" -ge "${epoch_days}" ]; then
        echo "${line}"
    fi
done < "${file}"
IFS=$OFS

Creating script to report system suspend or awake is not running?

Here is the code that should write to a file when the system goes into suspend or awakes:
(this code is in /etc/pm/sleep.d)
(I also had to make the file executable: sudo chmod +x sleep_mode)
(when running it from the command line, the "suspend script" entry is written to the file,
but when I suspend or awaken the computer... nothing is written to the file.)
(16.04 LTS)
#!/bin/bash
# general entry
echo "suspend script"
echo "%suspend script" >> /tmp/suspend_time.txt
date +%s >> /tmp/suspend_time.txt
case "$1" in
suspend)
# executed on suspend
echo "%system_suspend" >> /tmp/suspend_time.txt
date +%s >> /tmp/suspend_time.txt
;;
resume)
# executed on resume
echo "%system_resume" >> /tmp/suspend_time.txt
date +%s >> /tmp/suspend_time.txt
;;
*)
;;
esac
You don't say what distribution you are running, but if you are running the systemd daemon, try putting it in /lib/systemd/system-sleep instead (note that the arguments are different).
Script:
[eje@irenaeus ~]$ sudo cat /lib/systemd/system-sleep/95test
[sudo] password for eje:
#!/bin/bash
echo $(date) "$@" >> /tmp/args
Output:
Sun Dec 20 02:45:51 PM EST 2020 pre suspend
Sun Dec 20 02:45:59 PM EST 2020 post suspend
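For reference, a sketch adapting the original logging script to that argument convention, where $1 is pre or post and $2 names the sleep operation (the file name is just an example; it must be executable):
#!/bin/bash
# e.g. saved as /lib/systemd/system-sleep/95suspend_time (hypothetical name)
case "$1" in
pre)
    # executed before the system sleeps
    echo "%system_suspend" >> /tmp/suspend_time.txt
    date +%s >> /tmp/suspend_time.txt
    ;;
post)
    # executed after the system wakes
    echo "%system_resume" >> /tmp/suspend_time.txt
    date +%s >> /tmp/suspend_time.txt
    ;;
esac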

Bash script issue checking command output

I have a bash script in which I want to check the output of a Linux command.
The command is: sudo supervisorctl status
The normal output looks like this:
0: tuxtunnel RUNNING pid 563, uptime 11 days, 5:04:19
1: util_pkt_logger STOPPED Oct 11 01:20 PM
2: watchdog EXITED Oct 11 12:03 PM
My first attempt reads this output from the command and puts each line into an array; unfortunately, when I go to check whether a string is contained in this result, it seems to try to execute the check as a command. My script looks like this:
echo "its stopped"
x=$(sudo supervisorctl status)
SAVEIFS=$IFS
IFS=$'\n'
x=(${x})
IFS=$SAVEIFS
for (( i=0; i<${#x[@]}; i++ ))
do
echo "$i: ${x[$i]}"
if [$x[$i]] =~"STOPPED" #check if array contains this string
then
echo "its stopped"
fi
done
exit 0
When I try to perform the check is when things go haywire. I am new to bash scripts, so any help would be appreciated. I am trying to see if the line contains the word STOPPED.
Rather than reading the entire output of supervisorctl into a single variable and then manipulating the IFS variable to break the lines up, try reading one line at a time. Also, instead of matching STOPPED anywhere on the line, only look for it in the status column.
Try this:
#!/bin/bash
while read -r line; do
    echo "${line}"
    fields=( ${line} )    # word-split the line into columns
    if [ "${fields[2]}" == "STOPPED" ]; then    # third column holds the status
        echo "It's stopped."
    fi
done < <(sudo supervisorctl status)    # read the command output via process substitution
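Note the done < <(sudo supervisorctl status) at the end: redirecting from $( ... ) would make the shell treat the command's output as a file name to open, whereas process substitution <( ... ) feeds the output itself to the loop.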
You do not need to go for a while loop. You can use awk to solve this.
#!/bin/bash
sudo supervisorctl status | awk '{if ($3 == "STOPPED") print $2" is Stopped";}'
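Against the sample output in the question, where the third field holds the status, this prints, for example:
util_pkt_logger is Stopped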
We can also use the gawk utility. Please refer to this URL:
https://unix.stackexchange.com/questions/94047/shell-script-to-print-rows-if-there-is-a-value-in-column-2
cat testout
0: tuxtunnel RUNNING pid 563, uptime 11 days, 5:04:19
1: util_pkt_logger STOPPED Oct 11 01:20 PM
2: watchdog EXITED Oct 11 12:03 PM
3: TEST_log STOPPED OCT 1 11:11AM
gawk '$3=="STOPPED" {print $0}' testout
1: util_pkt_logger STOPPED Oct 11 01:20 PM
3: TEST_log STOPPED OCT 1 11:11AM
This was a brackets issue. This fixed it:
if [[ "${x[$i]}" =~ "STOPPED" ]]
(thanks to Gordon Davisson)

Reading file in bash loop on AIX/bash is much slower than in Linux/ksh - BLOCKSIZE?

We have a custom script (in ksh) which was developed on RHEL Linux.
The functionality is:
1) Read the input ASCII file
2) Replace "\" with "\\" in the files in place, using sed -i
3) Load the history file into memory
4) Compare the data with the current day
5) Generate the net change records
During a platform upgrade, we had to migrate this script to AIX 7.1. We replaced ksh with bash, since typeset -A is not available in the AIX ksh, and replaced the sed -i command with perl -pi -e; the rest of the script is almost the same.
On Linux the script processes 691 files in about 1 hour, but on AIX the same run takes 7+ hours.
For a single input file, the snippet below shows the difference: the Linux run completes within 1-2 seconds, whereas on AIX it takes 13-15 seconds. Multiplied across 691 files, this difference is why the script takes 7 hours to complete.
Could you please help me understand whether we can tune this script for better performance on AIX? Any pointers will be very helpful.
Thank you in advance for your help!
Adding test results below to describe the issue more precisely.
Linux Test script:
#!/bin/sh
export LANG="C"
echo `date`
typeset -A Archive_Lines
if [ -f "8249cii1.ASC" ]
then
echo `date` Starting sed
sed -i 's/\\/\\\\/g' 1577cii1.ASC
echo `date` Ending sed
while read line; do
if [[ "${#line}" == "401" ]]
then
Archive_Lines["${line:0:19}""${line:27}"]="${line:27:10}"
else
echo ${#line}
fi
done < 1577cii1.ASC
echo `date` Starting sed
sed -i 's/\\\\/\\/g' 1577cii1.ASC
echo `date` Ending sed
fi
echo `date`
Linux execution:
ksh read4.sh
Sun Nov 12 15:03:18 CST 2017
Sun Nov 12 15:03:18 CST 2017 Starting sed
Sun Nov 12 15:03:19 CST 2017 Ending sed
402
405
403
339
403
403
Sun Nov 12 15:03:22 CST 2017 Starting sed
Sun Nov 12 15:03:23 CST 2017 Ending sed
Sun Nov 12 15:03:23 CST 2017
AIX Test Script:
#!/usr/bin/bash
export LANG="C"
echo `date`
typeset -A Archive_Lines
if [ -f "1577cii1.ASC" ]
then
echo `date` Starting perl
perl -pi -e 's/\\/\\\\/g' 1577cii1.ASC
echo `date` Ending perl
while read line; do
if [[ "${#line}" == "401" ]]
then
Archive_Lines["${line:0:19}""${line:27}"]="${line:27:10}"
else
echo ${#line}
fi
done < 1577cii1.ASC
echo `date` Starting perl
perl -pi -e 's/\\\\/\\/g' 1577cii1.ASC
echo `date` Ending perl
fi
echo `date`
AIX Test execution:
bash read_test.sh
Sun Nov 12 15:00:17 CST 2017
Sun Nov 12 15:00:17 CST 2017 Starting perl
Sun Nov 12 15:00:18 CST 2017 Ending perl
402
405
313
403
337
403
403
Sun Nov 12 15:01:29 CST 2017 Starting perl
Sun Nov 12 15:01:29 CST 2017 Ending perl
Sun Nov 12 15:01:29 CST 2017
Replacing Archive_Lines["${line:0:19}""${line:27}"]="${line:27:10}" with echo "."
bash read_test.sh
Sun Nov 12 16:56:27 CST 2017
Sun Nov 12 16:56:27 CST 2017 Starting perl
Sun Nov 12 16:56:27 CST 2017 Ending perl
.
.
.
.
.
Sun Nov 12 16:56:42 CST 2017 Starting perl
Sun Nov 12 16:56:42 CST 2017 Ending perl
Sun Nov 12 16:56:42 CST 2017
With Archive_Lines["${line:0:19}""${line:27}"]="${line:27:10}"
bash read_test.sh
Sun Nov 12 16:59:52 CST 2017
Sun Nov 12 16:59:52 CST 2017 Starting perl
Sun Nov 12 16:59:52 CST 2017 Ending perl
402
405
313
403
337
403
403
Sun Nov 12 17:01:11 CST 2017 Starting perl
Sun Nov 12 17:01:11 CST 2017 Ending perl
Sun Nov 12 17:01:11 CST 2017
Thanks,
Vamsi
As Walter had suggested, it looks like there are some performance hits in bash for the substring processing (and possibly the length test).
It might be of interest to see what kind of timings you get with other solutions.
Here's a simplistic awk solution that should do the same thing as the original bash/substring logic (using your current sample data file; sans the output of line lengths != 401):
# note: in bash, the while loop at the end of a pipeline runs in a subshell,
# so use done < <(awk ...) instead if Archive_Lines must persist afterwards
awk 'length($0)==401 { print substr($0,1,19) substr($0,28) "|" substr($0,28,10) }' 1577cii1.ASC | \
while IFS="|" read -r idx val
do
    Archive_Lines["${idx}"]="${val}"
done
length($0)==401 : if line length is 401 then ...
print ...."|" ... : print 2 sections of output/fields separated by a pipe (|), where the fields are ...
substr($0,1,19)substr($0,28) : equivalent to your ${line:0:19}${line:27}
substr($0,28,10) : equivalent to your ${line:27:10}
at this point every line of length 401 is generating output like string1|string2
while IFS="|" read idx val : split the input back out into 2 variables ...
Archive_Lines["${idx}"]="${val}" : use the 2 variables as the array index/value pairs
NOTE: The pipe (|) was added as a field separator in case your substrings could include spaces; of course, if your substrings could include the pipe (|), replace it with some other character that won't show up in your substrings and which you can use as a field delimiter.
The objective is to see if awk's built-in length/substring processing is faster than bash's length/substring processing ...
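One simple way to compare is to wrap each variant with the shell's time keyword; the file names below are placeholders for the bash test script from the question and a copy with the loop replaced by the awk pipeline above:
time bash read_test.sh        # bash length/substring loop
time bash read_awk_test.sh    # hypothetical copy using the awk pipeline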
This solved my problem
#!/usr/bin/ksh93
export LANG="C"
echo `date`
typeset -A Archive_Lines
if [ -f "1577cii1.ASC" ]
then
echo `date` Starting perl
perl -pi -e 's/\\/\\\\/g' 1577cii1.ASC
echo `date` Ending perl
while read line; do
if [[ "${#line}" == "401" ]]
then
Archive_Lines[${line:0:19}${line:27}]="${line:27:10}"
else
echo ${#line}
fi
done < 1577cii1.ASC
echo `date` Starting perl
perl -pi -e 's/\\\\/\\/g' 1577cii1.ASC
echo `date` Ending perl
fi
echo `date`
ksh93 read_test3.sh
Sun Nov 12 19:19:34 CST 2017
Sun Nov 12 19:19:34 CST 2017 Starting perl
Sun Nov 12 19:19:34 CST 2017 Ending perl
402
405
403
339
403
403
Sun Nov 12 19:19:38 CST 2017 Starting perl
Sun Nov 12 19:19:39 CST 2017 Ending perl
Sun Nov 12 19:19:39 CST 2017

Redirect stderr with date to log file from Cron

A bash script is run from cron, and stderr is redirected to a logfile; this all works fine.
The code is:
*/10 5-22 * * * /opt/scripts/sql_fetch 2>> /opt/scripts/logfile.txt
I want to prepend the date to every line in the log file; this does not work. The code is:
*/10 5-22 * * * /opt/scripts/sql_fetch 2>> ( /opt/scripts/predate.sh >> /opt/scripts/logfile.txt )
The predate.sh script looks as follows:
#!/bin/bash
while read -r line ; do
    echo "$(date): ${line}"
done
So the second bit of code doesn't work; could someone shed some light?
Thanks.
I have a small script, cronlog.sh, to do this. The script code:
#!/bin/sh
echo "[`date`] Start executing $1"
"$@" 2>&1 | sed -e "s/\(.*\)/[`date`] \1/"
echo "[`date`] End executing $1"
Then you could do
cronlog.sh /opt/scripts/sql_fetch >> your_log_file
Example result
cronlog.sh echo 'hello world!'
[Mon Aug 22 04:46:03 CDT 2011] Start executing echo
[Mon Aug 22 04:46:03 CDT 2011] hello world!
[Mon Aug 22 04:46:03 CDT 2011] End executing echo
*/10 5-22 * * * (/opt/scripts/predate.sh; /opt/scripts/sql_fetch 2>&1) >> /opt/scripts/logfile.txt
should be exactly your way.
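Alternatively, a minimal sketch that pipes only stderr through predate.sh, assuming the crontab entries run under bash (process substitution is not available in plain sh), for example by setting SHELL at the top of the crontab:
SHELL=/bin/bash
*/10 5-22 * * * /opt/scripts/sql_fetch 2> >(/opt/scripts/predate.sh >> /opt/scripts/logfile.txt)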
