Keep getting error: "date --date=4 days ago: command not found" - linux

I set up a script on my dedicated server to back up all of my cPanel backup files to Amazon S3, and I've got it running every night via cron.
It ran perfectly last night and backed everything up, but then proceeded to delete everything it had backed up. It appears to have something to do with the date command: if I pull the "deletion" portion of the script out into another file and run it as an echo, I can't get the date to echo properly. I keep getting a "command not found" error.
Here's the full code for the backup script:
#!/bin/bash
##Notification email address
_EMAIL=klawncare1212@gmail.com
ERRORLOG=/var/log/s3_backup_logs/backup.err`date +%F`
ACTIVITYLOG=/var/log/s3_backup_logs/activity.log`date +%F`
##Directory which needs to be backed up
SOURCE=/backup/cpbackup/daily
##Name of the backup in bucket
DESTINATION=`date +%F`
##Backup degree
DEGREE=4
#Clear the logs if the script is executed second time
:> ${ERRORLOG}
:> ${ACTIVITYLOG}
##Uploading the daily backup to Amazon s3
/usr/bin/s3cmd -r put ${SOURCE} s3://MK_Web_Server_Backup/${DESTINATION}/ 1>>${ACTIVITYLOG} 2>>${ERRORLOG}
ret2=$?
##Send email alert
msg="BACKUP NOTIFICATION ALERT FROM `hostname`"
if [ $ret2 -eq 0 ];then
msg1="Amazon s3 DAILY Backup Uploaded Successfully"
else
msg1="Amazon s3 DAILY Backup Failed!!\n Check ${ERRORLOG} for more details"
fi
echo -e "$msg1"|mail -s "$msg" ${_EMAIL}
#######################
##Deleting backups older than DEGREE days
## Delete from both server and amazon
#######################
DELETENAME=$(dateĀ --date="${DEGREE} days ago" +%F)
/usr/bin/s3cmd -r --force del s3://MK_Web_Server_Backup/${DELETENAME} 1>>${ACTIVITYLOG} 2>>${ERRORLOG}
And here is the sample code that I'm testing, to try to simply echo the date from the code above:
#!/bin/bash
##Backup degree
DEGREE=4
#######################
##Deleting backups older than DEGREE days
## Delete from both server and amazon
#######################
DELETENAME=$(dateĀ --date="4 days ago" +%F)
echo ${DELETENAME}
What am I doing wrong? Every time I run this small test script on my CentOS Linux server through SSH, I get the following error:
"date --date=4 days ago: command not found"
So, it's not having any trouble inserting the "degree" variable's value. And if I simply run the same command at the prompt in SSH (date --date="4 days ago" +%F), it works like a charm, outputting the date just as I would expect it to.
What am I doing wrong?

You are probably picking up different versions of the date command when you run it from a regular terminal versus from the script, because the two environments use different PATHs. Either use the full path to the version of date you want, or explicitly set PATH at the beginning of the script.

mydate4=`date -d "4 days ago" +%F`
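A minimal sketch of that suggestion (the PATH value and the /bin/date location are typical for CentOS but are assumptions here, so adjust to your system):
#!/bin/bash
# Set an explicit PATH so cron and interactive shells resolve the same date binary.
export PATH=/usr/local/bin:/usr/bin:/bin
DEGREE=4
# Or bypass the PATH lookup entirely with an absolute path:
DELETENAME=$(/bin/date --date="${DEGREE} days ago" +%F)
echo "${DELETENAME}"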

Related

How to set and record alerts for Jenkin server down and up

I have a Jenkins pipeline job which goes through all our Jenkins servers and checks connectivity (it runs every few minutes).
ksh file:
#!/bin/ksh
JENKINS_URL=$1
curl --connect-timeout 10 "$JENKINS_URL" >/dev/null
status=$?   # curl's exit code; 7 means "failed to connect to host"
if [ "$status" = "7" ]; then
export SUBJECT="Connection refused or can not connect to URL $JENKINS_URL"
echo "$SUBJECT"|/usr/sbin/sendmail -t XXXX@gmail.com
else
echo "successfully connected $JENKINS_URL"
fi
exit 0
I would like to add another piece of code which records every time the server was down (including the name of the server and a timestamp) in a file, and, when the server is up again, sends a notification email and records that in the file as well.
I don't want to get extra alerts: only one alert (to the file and by mail) when it goes down, and one when it is up again. Any idea how to implement it?
A detailed answer was given by the unix.stackexchange.com community:
https://unix.stackexchange.com/questions/562594/how-to-set-and-record-alerts-for-jenkin-server-down-and-up
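In the spirit of that answer, here is a minimal sketch of the usual state-file approach, assuming illustrative file paths and reusing the mail command from the question; the script remembers the last known state and alerts only on a transition:
#!/bin/ksh
JENKINS_URL=$1
# Derive a per-server state file name from the URL (illustrative location).
STATEFILE=/var/tmp/jenkins_state_$(printf '%s' "$JENKINS_URL" | tr -c 'A-Za-z0-9' '_')
LOGFILE=/var/tmp/jenkins_updown.log
curl --connect-timeout 10 "$JENKINS_URL" >/dev/null 2>&1
if [ $? -eq 0 ]; then NEW=up; else NEW=down; fi
OLD=$(cat "$STATEFILE" 2>/dev/null)
# Record and mail only when the state changes.
if [ "$NEW" != "$OLD" ]; then
echo "$(date '+%F %T') $JENKINS_URL is $NEW" >> "$LOGFILE"
echo "$JENKINS_URL is $NEW" | /usr/sbin/sendmail -t XXXX@gmail.com
echo "$NEW" > "$STATEFILE"
fi
Because an alert fires only on a state change, you get exactly one notification when the server goes down and one when it comes back up.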

How to check last running time of any script in linux instance

I want to check whether my scripts ran last night (or find their last run timestamp) on a Linux instance, based on their crontab schedule.
So how do I get a script's last run time on a Linux instance?
I would suggest recording the start time at the beginning of the script and the end time at the end of the script.
# Start Time Entry
echo "Start : " $(date +%T) > exec.log
start=`date +%s`
CALL YOUR SCRIPT HERE
# End Time Entry
end=`date +%s`
echo "End : " $(date +%T) >> exec.log
# Get the Runtime
runtime=$((end-start))
echo "Runtime: $runtime" >> exec.log
If there is a better way, I am also curious to see and implement it.
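As a small variation on the same idea, bash's built-in SECONDS counter can replace the two date calls (a sketch, using the same illustrative exec.log name):
#!/bin/bash
SECONDS=0   # bash resets this counter and increments it once per second
echo "Start : $(date +%T)" > exec.log
# CALL YOUR SCRIPT HERE
echo "End : $(date +%T)" >> exec.log
echo "Runtime: ${SECONDS}s" >> exec.log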
Grep for cron in your "messages" or "syslog" file:
grep -i cron /var/log/messages
Or create a separate log file for cron via rsyslog: edit the file /etc/rsyslog.conf and change #cron to cron. You will then find the logs in /var/log/cron.
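For example, the relevant selector line in /etc/rsyslog.conf typically looks like this once uncommented (the exact line varies by distribution):
cron.* /var/log/cron
Then restart the daemon so the change takes effect:
systemctl restart rsyslog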

Rsync files across a dodgy network link - hangs instead of timeout

I am trying to keep 3 large directories (9G, 400G, 800G) in sync between our home site and another in a land far, far away, across a network link that is a bit dodgy (slow, and it drops occasionally). The data was copied onto disks prior to installation, so the rsync only needs to send updates.
The problem I'm having is the rsync hangs for hours on the client side.
The smaller 9G job completed; the 400G job has been in limbo for 15 hours with no output to the log file in that time, but it has not timed out.
Here is what I've done to set this up (after reading many forum articles about rsync/rsync server/partial, since I am not really a system admin):
I set up an rsync server (/etc/rsyncd.conf) on our home system, entered it into xinetd, and wrote a script to run rsync on the distant server; the script loops if rsync fails, in an attempt to deal with the dodgy network. The rsync command in the script looks like this:
rsync -avzAXP --append root@homesys01::tools /disk1/tools
Note: the "-P" option is equivalent to "--progress --partial".
I can see in the log file that rsync did fail at one point and the loop restarted it; data was transferred after that, based on entries in the log file, but the last update to the log file was 15 hours ago, and the rsync process on the client is still running.
CNT=0
while [ 1 ]
do
rsync -avzAXP --append root@homesys01::tools /disk1/tools
STATUS=$?
if [ $STATUS -eq 0 ] ; then
echo "Successful completion of tools rsync."
exit 0
else
CNT=`expr ${CNT} + 1`
echo " Rsync of tools failure. Status returned: ${STATUS}"
echo " Backing off and retrying(${CNT})..."
sleep 180
fi
done
I expected these jobs to take a long time; I expected to see the occasional failure message in the log files (which I have) and to see rsync restart (which it has). I was not expecting rsync to just hang for 15 hours or more with no progress and no timeout error.
Is there a way to tell if rsync on the client is hung versus dealing with the dodgy network?
I set no timeout in the /etc/rsyncd.conf file. Should I, and how do I determine a reasonable timeout setting?
I set rsync up to be available through xinetd, but I don't always see an "rsync --daemon" process running. It starts up again if I run rsync from the remote system, but shouldn't it always be running?
Any guidance or suggestions would be appreciated.
To tell the rsync client's working status, run it with the verbose option and keep a log file.
Change this line:
rsync -avzAXP --append root@homesys01::tools /disk1/tools
to:
rsync -avzAXP --append root@homesys01::tools /disk1/tools >>/tmp/rsync.log.`date +%F`
This produces one log file per day under the /tmp directory.
You can then use the tail -f command to follow the most recent log file;
if it is rolling, it is working.
See also
rsync - what means the f+++++++++ on rsync logs?
to understand more about the log format.
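For instance, to follow today's log file from the redirection above:
tail -f /tmp/rsync.log.`date +%F`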
I thought I would post my final solution, in case it can help anyone else. I added --timeout 300 and --append-verify. The timeout eliminates the case of rsync hanging indefinitely; the loop restarts it after the timeout. --append-verify is necessary to have rsync verify any partial file it previously updated.
Note the following code is in a shell script and the output is redirected to a log file.
CNT=0
while [ 1 ]
do
rsync -avzAXP --append-verify --timeout 300 root@homesys01::tools /disk1/tools
STATUS=$?
if [ $STATUS -eq 0 ] ; then
echo "Successful completion of tools rsync."
exit 0
else
CNT=`expr ${CNT} + 1`
echo " Rsync of tools failure. Status returned: ${STATUS}"
echo " Backing off and retrying(${CNT})..."
sleep 180
fi
done
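For completeness, a typical way to invoke such a retry script with its output redirected to a log file, as described above (the script name and log path are illustrative):
nohup /root/rsync_tools.sh >>/var/log/rsync_tools.log 2>&1 &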

Cron job stops working after mount operation

I have a simple cron job which prints the current date to a log file. For testing purposes, I've set this cron job to run every minute.
crontab -u user01 -e
* * * * * echo "Date is $(date)" >> /home/user01/date.log
It used to work before I created a logical volume, formatted the logical volume as ext4, and mounted it on /home/user01. After the mount operation, the job doesn't do anything.
After this, I created a crontab with just crontab -e, i.e. without giving the username, and that crontab started to work. But I want to know why my first crontab stopped working after the mount.
Also, I know the old date.log under /home/user01 is hidden after the mount operation, but the crontab should still write a new date.log every minute.
For the record, there isn't any problem with the mount itself: I checked /etc/fstab and df -hT, and the /home/user01 directory is mounted.
I have also tried the exact same cron job for another user (user02) in another directory, and it worked, so there isn't any syntax or privilege issue.
Also, when I check /var/log/cron, the output below appears every minute:
(user01) CMD (echo "Today is $(date)" >> /home/user01/date.log)
(user02) CMD (echo "Today is $(date)" >> /home/user02/date.log)
This output is written to the log file every minute, so I guess the crontab is running but not producing the output for user01 for some reason.
Thank you for your help
Can you log in as user01 and execute echo "Date is $(date)" >> /home/user01/date.log successfully?
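One way to run that check non-interactively (paths as in the question):
su - user01 -c 'echo "Date is $(date)" >> /home/user01/date.log'
If the write fails, it is also worth looking at the ownership and permissions of the new filesystem's mount point, since a freshly created filesystem is typically owned by root:
ls -ld /home/user01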

How to have a script trigger in my script after it ssh's into a DC's Time clock server?

So, I have a script whose intended purpose is to:
Ask for the DC number and time clock number
Log in to the time clock server for the DC stated above
After login, run a separate script from inside my script which updates the time clock number also stated above
My issue is that once I trigger the script, it logs into the server as intended and prompts me for my user ID, and then I have to press "enter" when "xterm" comes up after that. After this, the update script is supposed to run; however, it doesn't, and I am left sitting at the command line.
After I exit the server, THEN it runs the update script, but the script fails, because it doesn't exist on the jump box.
My question is: after the script logs in to the server, how can I get it to trigger the script inside the time clock server, as I want it to? Thanks.
Script is below:
#!/bin/bash -x
export LANG="C"
####
####
## This script is intended to speed up the process of setting up timeclocks from DC tickets
## Created by Blake Smreker | b0s00dg | bsmreker@walmart.com
####
####
#Asks for DC number
echo "What is the four digit DC number?"
read DC #User input
#Asks for Timeclock number
echo "What is the two digit Timeclock number?"
read TMC #User input
#Defines naming convention of tna server
tnaserver="cs-tna.s0${DC}.us.wal-mart.com"
#creating variable to define the update script
tcupd="/u/applic/tna/shell/tc_software_update.sh tmc${TMC}.s0${DC}.us REFURBISHED"
#Logging in to the cs-tna package at the specified DC
/usr/bin/dzdo -u osedc /bin/ssh -qo PreferredAuthentications=publickey root#$tnaserver
echo "Preforming Timeclock update on Timeclock=$TMC, at DC=${DC}"
echo ""
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
$tcupd #Runs update script
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
echo ""
sleep 2
echo "If prompted to engage NOC due to Timeclock not being on the network, send the ticket to DC Networking"
echo ""
echo "OR"
echo ""
echo "If the script completed successfully, and the Timeclock was updated, you can now resolve the ticket"
You must run the command inside the ssh session, not after it:
echo "Preforming Timeclock update on Timeclock=$TMC, at DC=${DC}"
echo ""
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
###### $tcupd #Runs update script
/usr/bin/dzdo -u osedc /bin/ssh -qo PreferredAuthentications=publickey root@$tnaserver "/u/applic/tna/shell/tc_software_update.sh tmc${TMC}.s0${DC}.us REFURBISHED"
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
echo ""
sleep 2
echo "If prompted to engage NOC due to Timeclock not being on the network, send the ticket to DC Networking"
echo ""
echo "OR"
echo ""
echo "If the script completed successfully, and the Timeclock was updated, you can now resolve the ticket"
From man ssh you can see the synopsis: ssh [-46AaCfGgKkMNnqsTtVvXxYy] ... destination [command]. If [command] is not given, ssh starts a remote interactive login and the remote login scripts run (which is where your xterm came from). You can read more in the ssh man page.
You also need to think about which environment variables you want to pass to the remote machine, and remember to enclose the variables properly in quotes so that they get expanded on your machine or on the remote machine, as intended.
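A small illustration of that quoting point (the hostname is a placeholder): double quotes expand the variable on the local machine before ssh runs, while single quotes send the text verbatim so it is expanded on the remote machine.
TMC=01
ssh root@cs-tna.example "echo tmc${TMC}" # expanded locally; the remote side just sees: echo tmc01
ssh root@cs-tna.example 'echo $(hostname)' # expanded remotely; prints the remote host's name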
