After truncating a log file in Linux, the newly created file is filled with many \0 bytes - linux

First, here is the shell script:
#!/bin/bash
filename=$1
if [ -e $filename ] ; then
    yesterday=`date -d yesterday +%Y%m%d`
    cp $filename $filename.$yesterday
    now=`date '+%Y-%m-%d%H:%M:%S'`
    echo "========split log at $now========" > $filename
    echo "========split log $filename to $filename.$yesterday at $now========"
else
    echo "$filename not exist."
fi
The script runs successfully and prints the string ========split log at $now======== into the newly created $filename. But below this string, many \0 bytes are also written to $filename, as shown here:
My reputation score is less than 10, so I cannot post an image; here is a link to the picture instead: http://i.stack.imgur.com/QF0F2.jpg

I wrote this shell script to truncate the log file created by nohup.
My original start command was: nohup $cmd > $logPath 2>&1 &;
I have now changed it to nohup $cmd >> $logPath 2>&1 &. Someone told me that with the > mode the log-writing program remembers its current offset in the log, and after the log is truncated, the program keeps writing at that offset.
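That explanation matches how file offsets work in Linux: with >, the writer's descriptor has no O_APPEND flag and keeps its own offset, so after truncation the next write lands at the old offset and the gap before it reads back as \0 bytes (a sparse file); with >>, O_APPEND sends every write to the current end of the file. A minimal sketch that reproduces the effect (file name and timings are arbitrary):

    # Background writer keeps test.log open via '>' (no O_APPEND)
    ( for i in $(seq 1 50); do echo "line $i"; sleep 0.1; done ) > test.log &
    sleep 2
    : > test.log                  # truncate the file from another process
    sleep 2
    od -c test.log | head         # the region before the new data reads back as \0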

Related

Shell - Redirected named-pipe tmp log file of nohup output writing junk logs

I am trying to create a logrotate script for a running nohup output (without any line breakage - I tried the system's logrotate package and observed that several log lines were missed while rotating the continuously growing log file). Here are the steps I followed.
Run the script below in the background:
#!/bin/bash
log_split_pipe="/tmp/log_split_pipe"
log_rename_interval_in_sec=60
log_file="/home/application/Logs/appname.log"
semaphore="/home/application/Logs/appname.log.pause"

write_log()
{
    while read line
    do
        while [ -f $semaphore ]
        do
            sleep 1
        done
        echo "$line" >> $log_file
    done < $log_split_pipe
}

write_log &

log_start_time=$(date +%s)
while true
do
    tim_diff=$(expr $(date +%s) - $log_start_time)
    if [ $tim_diff -ge $log_rename_interval_in_sec ];then
        touch $semaphore
        mv /home/application/Logs/appname.log /home/application/Logs/appname-$(date +%Y'_'%m'_'%d'_'%H'_'%M).log
        rm -rf $semaphore
        log_start_time=$(date +%s)
    fi
done
Create the fifo:
mkfifo /tmp/log_split_pipe
Run the application as:
nohup ./application 2>&1 1>/tmp/log_split_pipe &
The problem is that the log file /home/application/Logs/appname.log receives junk text instead of the proper logs written by the process.
Can anyone help me find the problem with this logic and how to fix it?

In Bash, how to avoid creating the redirected output file when the command fails

Usually we may redirect a command's output to a file, as follows:
cat a.txt >> output.txt
As I found, if cat fails, output.txt is still created, which isn't what I expected. I know I could test like this:
if [ "$?" -ne "0"]; then
rm output.txt
fi
But this may cause problems when output.txt already exists prior to my cat execution.
So I would also need to record whether output.txt existed before running cat; if it already existed, I should not rm it by mistake... and there may still be a race condition: what if another process creates output.txt right before my cat runs?
So is there any simple way to ensure that, if the command fails, the redirected output.txt is removed, or better, never created?
Fixed output file names are bad news; don't use them.
You should probably redesign the processing so that you have a date-stamped file name. Failing that, you should use the mktemp command to create a temporary file, have the command you want executed write to that, and when the command is successful, you can move the temporary to the 'final' output — and you can automatically clean up the temporary on failure.
outfile="./output-$(date +%Y-%m-%d.%H:%M:%S).txt"
tmpfile="$(mktemp ./gadget-maker.XXXXXXXX)"
trap "rm -f '$tmpfile'; exit 1" 0 1 2 3 13 15
if cat a.txt > "$tmpfile"
then mv "$tmpfile" "$outfile"
else rm "$tmpfile"
fi
trap 0
You can simplify the outfile to output.txt if you insist (but it isn't safe). You can use any prefix you like with the mktemp command. Note that by creating the temporary file in the current directory, where the final output file will be created too, you avoid cross-device file copying at the mv phase of operations — it is a link() and an unlink() system call (or maybe even a rename() system call if such a thing exists on your machine; it does on Mac OS X) only.
You can't tell that the command has failed until it terminates, and by then it might have produced some output.
Probably a more useful condition is to avoid creating the output file until the command actually produces some output, and not worry about its status code.
This comes close:
command | { IFS= read -rn1 -d '' a &&
            { printf %s "$a" >> output.txt
              cat >> output.txt
            }
          }
However, if the first character output by command is a NUL byte, the NUL won't be written to the output file. Since the extension of the output file is .txt, that's unlikely in this particular case, but it could be handled by adding the command
[[ -z $a ]] && printf '\0' >> output.txt
after the printf and before the cat.
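Putting those pieces together, a sketch of the complete pattern (command and output.txt are placeholders):

    command | { IFS= read -rn1 -d '' a &&
                { printf %s "$a" >> output.txt             # write the first byte read
                  [[ -z $a ]] && printf '\0' >> output.txt  # the first byte was a NUL
                  cat >> output.txt                         # copy the remaining output
                }
              }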
I think this will work, check this out.
[ -e output.txt ] && (mv output.txt output.txt_bkp)
cat a.txt > /dev/null 2>&1;[ $? -eq 0 ] && (cat a.txt > output.txt)
Another way, as suggested by Jonathan:
[ -e output.txt ] && (mv output.txt output.txt_bkp)
if cat a.txt > /dev/null 2>&1
then
cat a.txt > output.txt
fi

crontab not executing complex bash script

SOLVED! Add #!/bin/bash at the top of all my scripts in order to make use of bash extensions; otherwise cron restricts itself to POSIX shell syntax. Thanks Barmar!
Also, I'll add that I had trouble with gpg decryption not working from the cron job after I got it executing; the answer was to add the --no-tty option (no terminal output) to the gpg command.
I am fairly new to linux, so bear with me...
I am able to execute a simple script with crontab -e when logged in as ubuntu:
* * * * * /ngage/extract/bin/echoer.sh
and this bash script simply prints output to a file:
echo "Hello" >> output.txt
But when I try to execute my more complex bash script in exactly the same way, it doesn't work:
* * * * * /ngage/extract/bin/superMasterExtract.sh
This script calls into other bash scripts. There are 4 scripts in total, with 3 levels of hierarchy. It goes superMasterExtract > masterExtract > (decrypt, unzip).
Here is the code for superMasterExtract.sh (top level):
shopt -s nullglob # ignore empty globs
cd /str/ftp
DIRECTORY='writeable'
for d in */ ; do                        # for all directories in /str/ftp
    if [ -d "$d$DIRECTORY" ]; then      # if the directory contains a folder called 'writeable'
        files=($d$DIRECTORY/*)
        dirs=($d$DIRECTORY/*/)
        numdirs=${#dirs[@]}
        numFiles=${#files[@]}
        ((numFiles-=$numdirs))
        if [ $numFiles -gt 0 ]; then    # if the folder has at least one file in it
            bash /ngage/extract/bin/masterExtract.sh /str/ftp ${d:0:${#d} - 1} # execute masterExtract with two parameters passed in
        fi
    fi
done
masterExtract.sh:
DATE="$(date +"%m-%d-%Y_%T")"
LOG_FILENAME="log$DATE"
LOG_FILEPATH="/ngage/extract/logs/$2/$LOG_FILENAME"
echo "Log file is $LOG_FILEPATH"
bash /ngage/extract/bin/decrypt.sh $1 $2 $DATE
java -jar /ngage/extract/bin/sftp.jar $1 $2
bash /ngage/extract/bin/unzip.sh $1 $2 $DATE
java -jar /ngage/extract/bin/sftp.jar $1 $2
echo "Log file is $LOG_FILEPATH"
decrypt.sh:
shopt -s nullglob
UPLOAD_FILEPATH="$1/$2/writeable"
DECRYPT_FOLDER="$1/decryptedFiles/$2"
HISTORY_FOLDER="$1/encryptHistory/$2"
DONE_FOLDER="$1/doneFiles/$2"
LOG_FILENAME="log$3"
LOG_FILEPATH="/ngage/extract/logs/$2/$LOG_FILENAME"
echo "DECRYPT_FOLDER=$DECRYPT_FOLDER" >> $LOG_FILEPATH
echo "HISTORY_FOLDER=$HISTORY_FOLDER" >> $LOG_FILEPATH
cd $UPLOAD_FILEPATH
for FILE in *.gpg;
do
    FILENAME=${FILE%.gpg}
    echo ".done FILE NAME=$UPLOAD_FILEPATH/$FILENAME.done" >> $LOG_FILEPATH
    if [[ -f $FILENAME.done ]]; then
        echo "DECRYPTING FILE=$UPLOAD_FILEPATH/$FILE INTO $DECRYPT_FOLDER/$FILENAME" >> $LOG_FILEPATH
        cat /ngage/extract/.sftpPasswd | gpg --passphrase-fd 0 --output "$DECRYPT_FOLDER/$FILENAME" --decrypt "$FILE"
        mv $FILE $HISTORY_FOLDER/$FILE
        echo "MOVING FILE=$UPLOAD_FILEPATH/$FILE INTO $HISTORY_FOLDER/$FILE" >> $LOG_FILEPATH
    else
        echo "Done file not found!" >> $LOG_FILEPATH
    fi
done
cd $DECRYPT_FOLDER
for FILE in *
do
    mv $FILE $DONE_FOLDER/$FILE
    echo "DECRYPTED FILE=$DONE_FOLDER/$FILE" >> $LOG_FILEPATH
done
If anyone has a clue why it refuses to execute my more complicated script, I'd love to hear it. I have also tried setting some environment variables at the beginning of crontab as well:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/local/bin:/usr/bin
MAILTO=jgardnerx85@gmail.com
HOME=/
* * * * * /ngage/extract/bin/superMasterExtract.sh
Note, I don't know that these are the appropriate variables for my installation or my script. I just pulled them off other posts and tried it to no avail. If these aren't the correct environment variables, can someone tell me how I can deduce the right ones for my particular application?
You need to begin your script with
#!/bin/bash
in order to make use of bash extensions. Otherwise it restricts itself to POSIX shell syntax.
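As an illustration of why the shebang matters here, a minimal sketch (the paths follow the question, but the snippet itself is only an assumed example): without #!/bin/bash, cron's default /bin/sh rejects the bash-only constructs the question's scripts rely on.

    #!/bin/bash
    # With this shebang, cron runs the script with bash, so the bash-only
    # features used by the question's scripts are available:
    shopt -s nullglob                 # bash builtin, not in POSIX sh
    files=(/str/ftp/*/writeable/*)    # arrays are a bash extension
    echo "${#files[@]} files found"   # array-length syntax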

root running cron task can't read .txt file generated by www-data user

I have a simple php page that writes a file to my server.
// open new file
$filename = "$name.txt";
$fh = fopen($filename, "w");
fwrite($fh, "$name".";"."$abbreviation".";"."$uid".";");
fclose($fh);
I then have a cron job that I know runs as root, as I test for that and need it to.
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root" 1>&2
exit 1
fi
The cronjob is a bash script that can detect the file exists, but it can't seem to read the contents of the file.
#!/bin/bash
######################################################
#### Loop through the files and generate coincode ####
######################################################
for file in /home/test/customcoincode/queue/*
do
    echo $file
    chmod 777 $file
    echo "read file"
    while read -r coinfile; do
        echo $coinfile
        echo "Assign variables from file"
        #############################################
        #### Set the variables to from the file #####
        #############################################
        coinName=$(echo $coinfile | cut -f1 -d\;)
        coinNameAbreviation=$(echo $coinfile | cut -f2 -d\;)
        UId=$(echo $coinfile | cut -f3 -d\;)
    done < $file
    echo "`date +%H:%M:%S` - $coinName : Your Kryptocoin is being compiled!"
    echo $file
    echo "copy $coinName file to generated directory"
    cp -b $file /home/test/customcoincode/generatedCoins/$coinName.txt
    echo "`date +%H:%M:%S` : Delete queue file"
    # rm -f $file
done
echo $file recognises the file exists
echo $coinfile is blank
Yet when I open ./coinfile.txt with nano in the terminal, I can clearly see there is text in there.
I run ls -l and I see that the file has the permissions
-rw-r--r-- 1 www-data www-data
I was under the impression that this would still mean the file can be read by other users?
Do I need to be able to execute the file if i am opening it and reading the contents?
Any advice would be greatly appreciated. I can expand and show my code if you want, but it was working before when I called a bash script to write the file... at that time it saved the file under the root user with rwx for most, and the file could then be read. But that caused other issues in the PHP page, so it is not an option.
You have:
while read -r coinfile; do
...
I see no indication that you're reading from $file. The command
read -r coinfile
will simply read from standard input (the -r merely affects the treatment of backslashes). In a cron job, if I recall correctly, standard input is empty or unavailable, which would explain why $coinfile is empty.
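A quick way to see this behaviour (purely illustrative):

    # With empty standard input, read gets nothing and returns non-zero:
    read -r coinfile < /dev/null
    echo "exit=$? coinfile='$coinfile'"    # prints: exit=1 coinfile=''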
If you actually do read from $file -- for example, if your real code looks something like:
while read -r coinfile; do
...
done <$file
then you need to show us your entire script, or at least a self-contained version of it that exhibits the problem. Actually, you need to show us your entire script whether that's the problem or not.
http://sscce.org/

Bash script does not continue to read the next line of file

I have a shell script that saves the output of each command it executes to a CSV file. It reads the commands it has to execute from a file in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands. It was missing half for some reason. I edited the text file that contained the commands and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once a line is read it will run the processLine() function

#Function processLine
processLine(){
    line="$@"
    START=$(date +%s.%N)
    eval $line > /dev/null 2>&1
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
    echo "It took $DIFF seconds"
    echo $line
}

# Store file name
FILE=""
# get file name as command line argument
# Else read it from standard input device
if [ "$1" == "" ]; then
    FILE="/dev/stdin"
else
    FILE="$1"
    # make sure file exists and is readable
    if [ ! -f $FILE ]; then
        echo "$FILE : does not exists"
        exit 1
    elif [ ! -r $FILE ]; then
        echo "$FILE: can not read"
        exit 2
    fi
fi
# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
    # use $line variable to process line in processLine() function
    processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
BAKIFS=$ORIGIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b" as Paul pointed out. I am also making use of Johannes's shorter script.
I think that should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$@" | while read line; do
    echo "Executing '$line'"
    START=$(date +%s)
    eval $line &> /dev/null
    END=$(date +%s)
    let DIFF=$END-$START
    echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
    echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
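Applied to the question's loop, a minimal sketch (commands.txt stands in for the file of ffmpeg commands):

    #!/bin/bash
    # Redirect ffmpeg's stdin away from the loop's input so it cannot
    # swallow the remaining lines of the commands file.
    while read -r line; do
        eval "$line" </dev/null >/dev/null 2>&1
    done < commands.txt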
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line.
2) Since the script will now not wait for the ffmpeg process to complete, we have to prevent it from starting several ffmpeg processes. We achieve this by delaying each loop pass while there is at least one running ffmpeg process.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
    <place your ffmpeg command line here> &
    FFMPEGStillRunning="true"
    while [ "$FFMPEGStillRunning" = "true" ]; do
        Process=$(ps -C ffmpeg | grep -o -e "ffmpeg")
        if [ -n "$Process" ]; then
            FFMPEGStillRunning="true"
        else
            FFMPEGStillRunning="false"
        fi
        sleep 2s
    done
done
I would add echos before and after the eval to see what it's about to eval (in case it's treating the whole file as one big long line) and after (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
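For instance, a quick illustration of that splitting (variable names are arbitrary):

    echo "alpha beta gamma delta" | { read -r var1 var2 var3; echo "$var3"; }
    # prints: gamma delta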
Inside the process line function, you probably don't want to throw away error output from the executed command. You probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead with ignoring the error. Is the command 'chatty'; if so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$#" |
while read line
do
processline "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"
processLine(){
    START=$(date +%s.%N)
    eval "$@" > /dev/null
    STATUS=$?
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
    echo "${DIFF}s: $STATUS: $line"
}

cat "$@" |
while read line
do
    processLine "$line"
done
