How to ignore rsync warning about vanished files? - linux

I get a lot of mail from cron jobs that run rsync, mostly warnings about vanished files. I tried to suppress them with a wrapper script like this:
#!/bin/bash
/usr/bin/rsync "$@"
e=$?
if test $e = 24; then
    exit 0
fi
exit $e
I saved it as /usr/bin/rsync-no24.
After that, I changed my script for cronjob:
#!/bin/bash
SOURCE_BASE="/var/www/"
TARGETS="server30"
TARGET_DIR="/var/www/"
RSYNC_BIN="/usr/bin/rsync-no24"
RSYNC_OPTIONS="-aqqq"

/usr/bin/find ${SOURCE_BASE}/typo3temp ! -user www-data -exec chown -R www-data:www-data {} \;

#for SOURCE_DIR in fileadmin uploads typo3temp
#do
for TARGET_HOST in ${TARGETS}
do
    ${RSYNC_BIN} ${RSYNC_OPTIONS} ${SOURCE_BASE}/${SOURCE_DIR} ${TARGET_HOST}:${TARGET_DIR}/
done
#done
But I still get mail from cron, such as:
file has vanished:
"/var/www/stage2/typo3temp/tx_ncstaticfilecache/OnlineBackup/index33.html.5"
How can I ignore messages like this? Is something wrong with the wrapper script?
Thanks a lot.

Replace your /usr/bin/rsync-no24 with this:
#!/bin/bash
(rsync "$@"; e=$?; [ "$e" = 24 ] && exit 0; exit "$e") 2>&1 | grep -v 'vanished'
(On a side note, I don't think there's a difference between RSYNC_OPTIONS="-aqqq" and RSYNC_OPTIONS="-aq".)
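One caveat with the pipeline above: the wrapper's exit status becomes grep's, not rsync's. If whatever calls the wrapper also checks exit codes, a variant using bash's PIPESTATUS array might look like this (a sketch, not part of the original answer):
#!/bin/bash
# Filter the "file has vanished" noise, but report rsync's own exit status.
rsync "$@" 2>&1 | grep -v 'vanished'
e="${PIPESTATUS[0]}"        # exit status of rsync, not of grep
if [ "$e" = 24 ]; then      # 24 = partial transfer due to vanished source files
    exit 0
fi
exit "$e"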

Related

How to develop a Condition to close program only when log file has been updated in Bash Script [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool, which runs arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to run it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu

DIRECTORY_TO_OBSERVE="js"    # might want to change this
function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        $DIRECTORY_TO_OBSERVE
}

BUILD_SCRIPT=build.sh        # might want to change this too
function build {
    bash $BUILD_SCRIPT
}

build
while block_for_change; do
    build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked Github page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
    --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
    -e close_write /tmp/test |
while read -r date time dir file; do
    changed_abs=${dir}${file}
    changed_rel=${changed_abs#"$cwd"/}
    rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
        username@example.com:/backup/root/dir && \
    echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get the file's last status-change time (%Z, the ctime) and runs a command whenever that time changes.
#!/bin/bash
while true
do
    ATIME=`stat -c %Z /path/to/the/file.txt`
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo "RUN COMMAND"
        LTIME=$ATIME
    fi
    sleep 5
done
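For reference, GNU stat exposes all three timestamps; %Y (mtime) is usually what you want for "the file content changed" (format flags differ on BSD/macOS):
# GNU stat: %X = atime (last access), %Y = mtime (last content change), %Z = ctime (last status change)
stat -c 'atime=%X mtime=%Y ctime=%Z' /path/to/the/file.txt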
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
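A toy sketch of that idea (the log path here is just a placeholder): react every time new lines are appended to a file:
# -F follows the file across rotations; each appended line triggers the loop body.
tail -F /var/log/app.log | while read -r line; do
    echo "log grew: $line"
done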
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash

file="$1"         # Name of file
command="${*:2}"  # Command to run on change (takes rest of line)

t1="$(ls --full-time $file | awk '{ print $7 }')"  # Get latest save time
while true
do
    t2="$(ls --full-time $file | awk '{ print $7 }')"  # Compare to new save time
    if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi # If different, run command
    sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: The above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
    if [ -z "$1" -o -z "$2" ]; then
        echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
    elif ! [ -r "$1" ]; then
        echo "Can't react to $1, permission denied"
    else
        TARGET="$1"; shift
        ACTION="$@"
        while sleep 1; do
            ATIME=$(stat -c %Z "$TARGET")
            if [[ "$ATIME" != "${LTIME:-}" ]]; then
                LTIME=$ATIME
                $ACTION
            fi
        done
    fi
}
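Once the function is sourced (open a new shell or run source ~/.bashrc), usage looks like this; the watched file and the action are only examples:
react ./notes.txt echo notes.txt changed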
Quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if [ "$hash" != "$old_hash" ]
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a basic example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
    if [[ "$file" =~ .*rst$ ]]; then      # if suffix is '.rst'
        echo ${path}${file} ': '${action} # execute your command
        echo 'make html'
        make html
    fi
done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done
Benefits
You can run any command or script when the file changes.
It works across filesystems and between virtual machines (e.g. shared folders on VirtualBox with Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu (in VirtualBox), for example.
Warning
The -printf option works on Ubuntu (GNU find) but does not work on macOS (BSD find).
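A rough macOS equivalent for that inner command substitution, using BSD stat instead of -printf (a sketch, untested):
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f '%m' {} + | sort -rn | head -n1)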

linux zip and exclude dir via bash/shell script

I am trying to write a bash/shell script to zip up a specific folder and ignore certain sub-dirs in that folder.
The folder I am trying to zip is "sync_test5".
My bash script generates an ignore list and calls zip like this:
#!/bin/bash

SYNC_WEB_ROOT_BASE_DIR="/home/www-data/public_html"
SYNC_WEB_ROOT_BACKUP_DIR="sync_test5"
SYNC_WEB_ROOT_IGNORE_DIR="dir_to_ignore dir2_to_ignore"

ignorelist=""
if [ "$SYNC_WEB_ROOT_IGNORE_DIR" != "" ];
then
    for ignoredir in $SYNC_WEB_ROOT_IGNORE_DIR
    do
        ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/**\*"
    done
fi

FILE="$SYNC_BACKUP_DIR/$DATETIMENOW.website.zip"
cd $SYNC_WEB_ROOT_BASE_DIR;
zip -r $FILE $SYNC_WEB_ROOT_BACKUP_DIR -x $ignorelist >/dev/null
echo "Done"
This script runs without error; however, it is not excluding the dirs I've specified.
So, I had the shell script output the command it tried to run, which was:
zip -r 12-08-2014_072810.website.zip sync_test5 -x sync_test5/dir_to_ignore/**\* sync_test5/dir2_to_ignore/**\*
If I run the above command directly in PuTTY, it works.
So why doesn't the exclude work as intended in my shell script? The command being executed is identical (in the script and in PuTTY directly).
Because backslash escapes stored in a variable are not evaluated again after word splitting.
If you have a='123\4', echo $a would give
123\4
But if you do it directly like echo 123\4, you'd get
1234
Clearly, the arguments you pass via the variable and the arguments you type directly are different.
You probably just meant to not quote your argument with backslash:
ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/***"
By the way, what actually works is an unexpanded glob pattern:
zip -r 12-08-2014_072810.website.zip sync_test5 -x 'sync_test5/dir_to_ignore/***' 'sync_test5/dir2_to_ignore/***'
You can verify this with
echo zip -r 12-08-2014_072810.website.zip sync_test5 -x sync_test5/dir_to_ignore/**\* sync_test5/dir2_to_ignore/**\*
And this is my suggestion:
#!/bin/bash

SYNC_WEB_ROOT_BASE_DIR="/home/www-data/public_html"
SYNC_WEB_ROOT_BACKUP_DIR="sync_test5"
SYNC_WEB_ROOT_IGNORE_DIR=("dir_to_ignore" "dir2_to_ignore")

IGNORE_LIST=()
if [[ -n $SYNC_WEB_ROOT_IGNORE_DIR ]]; then
    for IGNORE_DIR in "${SYNC_WEB_ROOT_IGNORE_DIR[@]}"; do
        IGNORE_LIST+=("$SYNC_WEB_ROOT_BACKUP_DIR/$IGNORE_DIR/***") ## "$SYNC_WEB_ROOT_BACKUP_DIR/$IGNORE_DIR/*" perhaps is enough?
    done
fi

FILE="$SYNC_BACKUP_DIR/$DATETIMENOW.website.zip" ## Where is $SYNC_BACKUP_DIR set?
cd "$SYNC_WEB_ROOT_BASE_DIR"
zip -r "$FILE" "$SYNC_WEB_ROOT_BACKUP_DIR" -x "${IGNORE_LIST[@]}" >/dev/null
echo "Done"
This is what I ended up with:
#!/bin/bash
# This script zips a directory, excluding specified files, types and subdirectories.
# While zipping the directory it excludes hidden directories and certain file types.

[[ "`/usr/bin/tty`" == "not a tty" ]] && . ~/.bash_profile

DIRECTORY=$(cd `dirname $0` && pwd)
if [[ -z $1 ]]; then
    echo "Usage: managed_directory_compressor /your-directory/ zip-file-name"
else
    DIRECTORY_TO_COMPRESS=${1%/}
    ZIPPED_FILE="$2.zip"

    COMPRESS_IGNORE_FILE=("\.git" "*.zip" "*.csv" "*.json" "gulpfile.js" "*.rb" "*.bak" "*.swp" "*.back" "*.merge" "*.txt" "*.sh" "bower_components" "node_modules")
    COMPRESS_IGNORE_DIR=("bower_components" "node_modules")
    IGNORE_LIST=("*/\.*" "\.*" "\/\.*")

    if [[ -n $COMPRESS_IGNORE_FILE ]]; then
        for IGNORE_FILES in "${COMPRESS_IGNORE_FILE[@]}"; do
            IGNORE_LIST+=("$DIRECTORY_TO_COMPRESS/$IGNORE_FILES/*")
        done
        for IGNORE_DIR in "${COMPRESS_IGNORE_DIR[@]}"; do
            IGNORE_LIST+=("$DIRECTORY_TO_COMPRESS/$IGNORE_DIR/")
        done
    fi

    zip -r "$ZIPPED_FILE" "$DIRECTORY_TO_COMPRESS" -x "${IGNORE_LIST[@]}" # >/dev/null
    # echo zip -r "$ZIPPED_FILE" "$DIRECTORY_TO_COMPRESS" -x "${IGNORE_LIST[@]}" # >/dev/null
    echo $DIRECTORY_TO_COMPRESS "compressed as" $ZIPPED_FILE.
fi
After some trial and error, I managed to fix this problem by changing this line:
ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/**\*"
to:
ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/***"
Not sure why this worked, but it does :)
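To confirm the exclusions actually took effect, you can list the archive contents (the archive and directory names here are from the example above; adjust to taste):
# grep exits non-zero when nothing matches, so "excluded OK" prints only if
# no entry under dir_to_ignore made it into the zip.
unzip -l 12-08-2014_072810.website.zip | grep dir_to_ignore || echo "excluded OK"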

Shell Scripting: Print directory names and files with specifics

In my script, I ask the user to input a directory and then list all the files in that directory. I want to improve the display a little: append a "/" if an item in the directory is another directory, and append "**" if it is an executable file (not an executable directory).
This is what I have:
echo "Directory: "
read thing

for var123 in $thing*
do
    echo $var123
done
In a directory I have a few folders and a few scripts that have the execute permission. When I run the script, I want output like:
/folder1/subfolder1/
/folder1/subfolder2/
/folder1/file1*
/folder1/file2*
I am new to this and have no clue what I am doing. Any help will be greatly appreciated.
You might want to check and make sure the user inputs something that ends in a / first.
e.g.
[[ $thing =~ '/'$ ]] || thing="$thing/"
Also check if it exists
e.g.
[[ -d $thing ]] || exit 1
To check whether it's a directory, use the -d test as above. To check for an executable file, use -x. Putting that all together, try:
#!/bin/bash

echo "Directory: "
read thing
[[ $thing =~ '/'$ ]] || thing="$thing/"
[[ -d $thing ]] || exit 1

for var123 in "$thing"*
do
    if [[ -f $var123 && -x $var123 ]]; then
        echo "$var123**"
    elif [[ -d $var123 ]]; then
        echo "$var123/"
    else
        echo "$var123"
    fi
done
ls -F is your friend here - if you want to do it for the current directory:
ls -F
If you want to do it for all files & subfolders of the current directory:
find * -exec ls -Fd {} \;
... and for a given directory:
echo "Directory: "
read DIR
find $DIR/* -exec ls -Fd {} \;
Edit: ls -F will append a / to directories and a * to executables. If you want ** instead, just use sed to replace them:
find $DIR/* -exec ls -Fd {} \; | sed 's/\*$/&&/'
And this approach works in all shells, not just bash.

Bash Script if a file exists and larger than loop

*Note: I edited this, so my final functioning code is below.
OK, so I'm writing a bash script to back up our MySQL database to a directory, delete the oldest backup once 10 exist, and output the results of the backup to a log so I can create alerts if it fails. Everything works great except the if conditional that outputs the results. Thanks again for the help, guys; the code is below!
#!/bin/bash

# This creates a variable with the date stamp to add to the filename
now=$(date +"%m_%d_%y")

# This moves the bash shell to the directory of the backups
cd /dbbkp/backups/

# Counts the number of files in the directory with the *.sql extension and deletes the oldest once 10 is reached.
[[ $(ls -ltr *.sql | wc -l) -gt 10 ]] && rm $(ls -ltr *.sql | awk 'NR==1{print $NF}')

# Moves the bash shell to the mysql bin directory to run the backup script
cd /opt/GroupLink/everything_HelpDesk/mysql/bin/

# Command to run and dump the mysql db to the directory
./mysqldump -u root -p dbname > /dbbkp/backups/ehdbkp_$now.sql --protocol=socket --socket=/tmp/GLmysql.sock --password=password

# Echo the results to the log file
# Change back to the directory you created the backup in
cd /dbbkp/backups/

# If conditional to check whether the backup exists and is the proper size
if find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null | grep -q .; then
    echo "The backup has run successfully" >> /var/log/backups
else
    echo "The backup was unsuccessful" >> /var/log/backups
fi
Alternatively, you could use stat instead of find.
if [ $(stat -c %s ehdbkp_$now 2>/dev/null || echo 0) -gt 51200 ]; then
    echo "The backup has run successfully"
else
    echo "The backup was unsuccessful"
fi >> /var/log/backups
Option -c %s tells stat to return the size of the file in bytes. This takes care of both the presence of the file and the size check. When the file is missing, stat errs out, so we redirect the error message to /dev/null. The logical or condition || gets executed only when the file is missing, which makes the comparison [ 0 -gt 51200 ] false.
To check if the file exists and larger than 51200 bytes you could rewrite your if like this:
if find ehdbkp_$now -type f -size +51200c 2>/dev/null | grep -q .; then
    echo "The backup has run successfully"
else
    echo "The backup was unsuccessful"
fi >> /var/log/backups
Other notes:
The find takes care of two things at once: it checks that the file exists and that its size is greater than 51200 bytes.
We redirect stderr to /dev/null to hide the error message if the file doesn't exist.
If there is a file matching both conditions, grep will match and exit with success; otherwise it will exit with failure.
The final outcome of the grep is what decides the if condition.
I moved the >> /var/log/backups after the closing fi, as it's equivalent this way and avoids duplication.
By the way, if is NOT a loop; it's a conditional.
UPDATE
As @glennjackman pointed out, a better way to write the if, without grep:
if [[ $(find ehdbkp_$now -type f -size +51200c 2>/dev/null) ]]; then
...
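Spelled out in full, that version might read as follows (a sketch using the file name from the question; same logic as above, just without grep):
if [[ $(find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null) ]]; then
    echo "The backup has run successfully"
else
    echo "The backup was unsuccessful"
fi >> /var/log/backups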
