Shell - Redirecting nohup output through a named pipe in /tmp writes junk to the log file - Linux

I am trying to write a log rotation script for the output of a running nohup process, without losing any lines (I tried the system's logrotate package and observed that several log lines went missing while rotating the continuously growing log file). Here are the steps I followed.
Run the script below in the background:
#!/bin/bash
log_split_pipe="/tmp/log_split_pipe"
log_rename_interval_in_sec=60
log_file="/home/application/Logs/appname.log"
semaphore="/home/application/Logs/appname.log.pause"

# Reader: drain the named pipe into the log file, pausing while the
# semaphore file exists (i.e. while the log is being renamed).
write_log()
{
    while read line
    do
        while [ -f "$semaphore" ]
        do
            sleep 1
        done
        echo "$line" >> "$log_file"
    done < "$log_split_pipe"
}

write_log &

# Rotator: every $log_rename_interval_in_sec seconds, pause the reader,
# rename the current log file, then resume.
log_start_time=$(date +%s)
while true
do
    tim_diff=$(expr $(date +%s) - $log_start_time)
    if [ $tim_diff -ge $log_rename_interval_in_sec ]; then
        touch "$semaphore"
        mv /home/application/Logs/appname.log /home/application/Logs/appname-$(date +%Y'_'%m'_'%d'_'%H'_'%M).log
        rm -f "$semaphore"
        log_start_time=$(date +%s)
    fi
done
Create the named pipe:
mkfifo /tmp/log_split_pipe
Then run the application as:
nohup ./application 2>&1 1>/tmp/log_split_pipe &
The problem is that the log file /home/application/Logs/appname.log ends up containing junk text instead of the actual log lines written by the process.
Can anyone help me find the problem with this logic and how to rectify it?
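One thing worth double-checking (an assumption on my part, since the junk output itself isn't shown): a bare read strips backslashes and leading/trailing whitespace, which can corrupt log lines as they pass through the loop. A reader that copies lines verbatim would look like this sketch:
# Sketch: 'IFS=' preserves leading/trailing whitespace, '-r' stops
# backslash interpretation, and printf avoids echo's escape handling.
write_log()
{
    while IFS= read -r line
    do
        while [ -f "$semaphore" ]
        do
            sleep 1
        done
        printf '%s\n' "$line" >> "$log_file"
    done < "$log_split_pipe"
}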

Related

Use of echo >> produces inconsistent results

I've been trying to understand a problem that's cropped up with some of the scripts we use at work.
To generate many of our script logs, we utilize the exec command and file redirects to print all output from the script to both the terminal and a log file. Occasionally, for information that doesn't need to be displayed to the user, we do a straight redirect to the log file.
The issue we're seeing occurs on the last line of output to the file when we're printing the number of errors that occurred during that execution: The text doesn't get printed to the file.
In an attempt to diagnose the problem, I wrote a simplified version of our production script (script1.bash) and a test script (script2.bash) to try to tease out the problem.
script1.bash
#!/bin/bash

log_name="${USER}_`date +"%Y%m%d-%H%M%S"`_${HOST}_${1}.log"
log="/tmp/${log_name}"
log_tmp="/tmp/temp_logs"
err_count=0

finish()
{
    local ecode=0
    if [ $# -eq 1 ]; then
        ecode=${1}
    fi
    # This is the problem line
    echo "Error Count: ${err_count}" >> "${log}"
    mvlog
    local success=$?
    exec 1>&3 2>&4
    if [ ${success} -ne 0 ]; then
        echo ""
        echo "WARNING: Failed to save log file to ${log_tmp}"
        echo ""
        ecode=$((ecode+1))
    fi
    exit ${ecode}
}

mvlog()
{
    local ecode=1
    if [ ! -d "${log_tmp}" ]; then
        mkdir -p "${log_tmp}"
        chmod 775 "${log_tmp}"
    fi
    if [ -d "${log_tmp}" ]; then
        rsync -pt --bwlimit=4096 "${log}" "${log_tmp}/${log_name}" 2> /dev/null
        [ $? -eq 0 ] && ecode=0
        if [ ${ecode} -eq 0 ]; then
            rm -f "${log}"
        fi
    fi
}

exec 3>&1 4>&2 > >(tee "${log}") 2>&1

ecode=0
echo
echo "Some text"
echo
finish ${ecode}
script2.bash
#!/bin/bash

runs=10000
logdir="/tmp/temp_logs"

if [ -d "${logdir}" ]; then
    rm -rf "${logdir}"
fi

for i in $(seq 1 ${runs}); do
    echo "Conducting run #${i}/${runs}..."
    ${HOME}/bin/script1.bash ${i}
done

echo "Scanning logs from runs..."
total_count=`find "${logdir}" -type f -name "*.log*" | wc -l`
missing_count=`grep -L 'Error Count:' ${logdir}/*.log* | grep -c /`
echo "Number of runs performed: ${runs}"
echo "Number of log files generated: ${total_count}"
echo "Number of log files missing text: ${missing_count}"
My first test indicated roughly 1% of the time the line isn't written to the log file. I then proceeded to try several different methods of handling this line of output.
#1 Echo and Wait
    echo "Error Count: ${err_count}" >> "${log}"
    wait
#2 Alternate print method
    printf "Error Count: %d\n" ${err_count} >> "${log}"
#3 No Explicit File Redirection
    echo "Error Count: ${err_count}"
#4 Echo and Sleep
    echo "Error Count: ${err_count}" >> "${log}"
    sleep 0.2
Of these, #1 and #2 each had a 1% fail rate while #4 had a staggering 99% fail rate. #3 was the only methodology that had a 0% fail rate.
At this point, I'm at a loss for why this behavior is occurring, so I'm asking the gurus here for any insight.
(Note that the simple solution is to implement #3, but I want to know why this is happening.)
Without testing, this looks like a race condition between your script and tee. It's generally better to avoid multiple programs writing to the same file at the same time.
If you do insist on having multiple writers, make sure they are all in append mode, in this case by using tee -a. Appends to the local filesystem are atomic, so all writes should make it (this is not necessarily true for networked file systems).
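Applied to script1.bash, that means opening tee in append mode so that both the tee process and the later >> redirection are appending writers (a minimal sketch; the rest of the script stays as-is):
# Sketch: with 'tee -a', every writer appends, so each write lands at the
# current end of file and the final "Error Count" line cannot be clobbered
# by a late flush from tee.
exec 3>&1 4>&2 > >(tee -a "${log}") 2>&1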

Linux Script to check if process is running and restart if not

I have this script, which looks for the filebeat process and restarts it if it is not running. Cron runs this script every 5 minutes. Most of the time this works fine, except that sometimes it creates multiple filebeat processes. Can someone please point out the issue in my script?
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
service=filebeat
servicex=/usr/share/filebeat/bin/filebeat
pid=`pgrep -x "filebeat"`
if [ $pid > /dev/null ]
then
    echo "$(date) $service is running!!!"
else
    echo "$(date) starting $service"
    cd /home/hpov/beats/filebeat
    ./filebeat -c filebeat.yml &
fi
#!/bin/bash
pidof script.x86 >/dev/null
if [[ $? -ne 0 ]] ; then
    echo "Restarting script: $(date)" >> /var/log/script.txt
    /etc/script/script.x86 &
fi
Super easy :D
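Applying the same idea to the filebeat case (a sketch, reusing the paths from the question): note that in the original script, [ $pid > /dev/null ] is parsed as a file redirection inside the test command, not a comparison, so checking pgrep's exit status directly is more reliable:
#!/bin/bash
# Sketch: rely on pgrep's exit status; 'pgrep -x' returns 0 only if a
# process named exactly 'filebeat' exists.
if pgrep -x filebeat > /dev/null
then
    echo "$(date) filebeat is running!!!"
else
    echo "$(date) starting filebeat"
    cd /home/hpov/beats/filebeat && ./filebeat -c filebeat.yml &
fi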

Background rsync and pid from a shell script

I have a shell script that does a backup. I run this script from cron, but the problem is that the backup is heavy, so it is possible for a second rsync to start before the first one finishes.
I thought of launching rsync from the script, getting its PID, and writing it to a file; the script would then check whether the process is still running (by checking whether the file exists) before starting another run.
If I put rsync in the background I get the PID, but I don't know how to tell when rsync finishes; if I run rsync in the foreground I can't get the PID until the process finishes, so I can't write the PID file.
I don't know what the best way is to keep control of rsync and know when it finishes.
My script:
#!/bin/bash
pidfile="/home/${USER}/.rsync_repository"
if [ -f $pidfile ];
then
    echo "PID file exists " $(date +"%Y-%m-%d %H:%M:%S")
else
    rsync -zrt --delete-before /repository/ /mnt/backup/repositorio/ < /dev/null &
    echo $$ > $pidfile
    # If I uncomment this 'rm' while rsync is running in the background,
    # the file is deleted immediately, so I can't "control" when rsync finishes
    # rm $pidfile
fi
Can anybody help me?!
Thanks in advance !! :)
# check to make sure script isn't still running
# if it's still running then exit this script
sScriptName="$(basename $0)"
if [ $(pidof -x ${sScriptName} | wc -w) -gt 2 ]; then
    exit
fi
pidof finds the PID of a process
-x tells it to look for scripts too
${sScriptName} is just the name of the script... you can hardcode this
wc -w returns the word count
-gt 2 allows no more than one instance running (the running instance, plus one extra because the pidof check itself counts as a copy of the script)
If more than one instance is running, the script exits.
Let me know if this works for you.
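To see why the threshold is 2 rather than 1 (a sketch; exact counts can vary by shell): the $( ) command substitution forks a copy of the running script, which keeps the same name in the process table, so pidof -x counts it too.
#!/bin/bash
# Sketch: typically prints 2 when only one copy of this script is running,
# because the command-substitution subshell is itself a copy of the script.
sName="$(basename "$0")"
echo "instances seen: $(pidof -x "$sName" | wc -w)"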
Test both for presence of pid file and status of the running process like this:
#!/bin/bash
pidfile="/home/${USER}/.rsync_repository"
is_running=0
if [ -f $pidfile ];
then
    echo "PID file exists " $(date +"%Y-%m-%d %H:%M:%S")
    previous_pid=`cat $pidfile`
    # 'grep -v grep' keeps the grep process itself out of the count
    is_running=`ps -ef | grep -v grep | grep -c $previous_pid`
fi
if [ $is_running -gt 0 ];
then
    echo "Previous process didn't quit yet"
else
    rsync -zrt --delete-before /repository/ /mnt/backup/repositorio/ < /dev/null &
    echo $$ > $pidfile
fi
Hope this helps!!!
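An alternative worth mentioning (a sketch, assuming flock(1) from util-linux is available): let the kernel hold a lock for the lifetime of the script, so there is no PID file to maintain and a crash cannot leave a stale lock behind.
#!/bin/bash
# Sketch: flock releases the lock automatically when fd 200 is closed,
# i.e. when this script (and the rsync it runs) exits.
lockfile="/home/${USER}/.rsync_repository.lock"
exec 200>"$lockfile"
if ! flock -n 200; then
    echo "Previous rsync still running " $(date +"%Y-%m-%d %H:%M:%S")
    exit 0
fi
rsync -zrt --delete-before /repository/ /mnt/backup/repositorio/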

Bash script to re-launch program in case of failure error

In Linux (I use Ubuntu), I run a (Ruby) program that runs continuously all day long. My job is to monitor whether the program fails and, if so, re-launch it. This consists of simply hitting 'Up' for the last command and 'Enter'. Simple enough.
There has to be a way to write a bash script that monitors my program and re-launches it automatically if it stops working.
How would I go about doing this?
A bonus would be being able to save the output of the program when it errors.
What you could do:
#!/bin/bash
LOGFILE="some_file.log"
LAUNCH="your_program"
while :
do
    echo "New launch at `date`" >> "${LOGFILE}"
    ${LAUNCH} >> "${LOGFILE}" 2>&1 &
    wait
done
Another way is to periodically check the PID:
#!/bin/bash
LOGFILE="some_file.log"
LAUNCH="your_program"
PID=""
CHECK=""
while :
do
    if [ -n "${PID}" ]; then
        CHECK=`ps -o pid:1= -p "${PID}"`
    fi
    # If PID does not exist anymore, launch again
    if [ -z "${CHECK}" ]; then
        echo "New launch at `date`" >> "${LOGFILE}"
        # Launch command and keep track of the PID
        ${LAUNCH} >> "${LOGFILE}" 2>&1 &
        PID=$!
    fi
    sleep 2
done
Infinite loop:
while true; do
    your_program >> /path/to/error.log 2>&1
done
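If you also want the bonus from the question, recording why the program died, a small variation of the loop (sketch) logs the exit status after each run:
while true; do
    your_program >> /path/to/error.log 2>&1
    # $? here is your_program's exit status from the line above
    echo "$(date): your_program exited with status $?; relaunching" >> /path/to/error.log
done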

after truncated log file in linux,the new created file was filled with many \0

First, I will give the shell code:
#!/bin/bash
filename=$1
if [ -e $filename ] ; then
    yesterday=`date -d yesterday +%Y%m%d`
    cp $filename $filename.$yesterday
    now=`date '+%Y-%m-%d %H:%M:%S'`
    echo "========split log at $now========" > $filename
    echo "========split log $filename to $filename.$yesterday at $now========"
else
    echo "$filename does not exist."
fi
The script runs successfully and prints the string ========split log at $now======== to the newly created $filename. But below this string, many \0 bytes are also written to $filename, as shown in the screenshot.
My reputation score is less than 10, so I cannot post an image; here is a link to the picture: http://i.stack.imgur.com/QF0F2.jpg
I wrote the shell script to truncate the log file created by nohup.
My original start command was nohup $cmd > $logPath 2>&1 &; I have now changed it to nohup $cmd >> $logPath 2>&1 &. Someone told me that with >, the writing program keeps its own offset into the log file, and after the file is truncated it continues writing at that same offset; the gap before the offset then reads back as \0 bytes. With >>, the file is opened with O_APPEND, so every write goes to the current end of the file instead.
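The effect is easy to reproduce (a sketch): hold a file open without O_APPEND, truncate it from outside, and the writer's next write leaves a NUL-filled hole at the start of the file.
#!/bin/bash
# Sketch: a background writer opens demo.log with '>' (no O_APPEND) and
# keeps its own file offset; truncating the file does not reset that
# offset, so the bytes before it read back as \0.
( exec > /tmp/demo.log; for i in 1 2 3; do echo "line $i"; sleep 2; done ) &
sleep 3
: > /tmp/demo.log            # truncate while the writer is still running
wait
od -c /tmp/demo.log | head   # the leading bytes show as \0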
