Usage of envsubst combined with tee randomly results in an empty file - linux

I'm looking to understand what is going on in a simple script that seems to produce random results.
What I am trying to do:
Replace variables in pre-existing files with the values defined in the environment.
This is done inside a Docker container with a bash script that runs the command:
envsubst '$VAR1 $VAR2' < $FILE | tee $FILE
What happens:
Sometimes a $FILE in question has contents before the command but contains nothing after it.
How to reproduce the issue:
Dockerfile:
FROM debian:stretch
RUN apt-get update -qy
RUN apt-get install -qy gettext
COPY main-script /main-script
RUN chmod +x /main-script
ENTRYPOINT [ "/main-script" ]
Bash script:
#!/bin/bash
mkdir -p /test
export TEST1=1
export TEST2=2
export TEST3=3
for I in {1..300} ; do
    echo '$TEST1 $TEST2 $TEST3' > /test/file-$I
done
for FILE in /test/file-* ; do
    envsubst < $FILE | tee $FILE
done
for FILE in /test/file-* ; do
    if [[ -z "$(cat $FILE)" ]]; then
        echo "$FILE is empty!"
        FAIL=1
    fi
done
if [[ -n "$FAIL" ]]; then
    exit 2
fi
Output looks something like this:
...
/test/file-11 is empty!
/test/file-180 is empty!
/test/file-183 is empty!
/test/file-295 is empty!

Pipes are asynchronous, so you've introduced a race condition: you can't predict whether envsubst reads from $FILE before or after tee truncates it.
The correct approach is to write the changes to a temporary file, then replace the original with the temporary file once that has succeeded.
tmp=$(mktemp)
envsubst < "$FILE" > "$tmp" && mv "$tmp" "$FILE"

Related

How to develop a Condition to close program only when log file has been updated in Bash Script [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to use it in a loop because entr exits when a new file is added to a watched directory, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        $DIRECTORY_TO_OBSERVE
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
    bash $BUILD_SCRIPT
}
build
while block_for_change; do
    build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
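For instance, to trigger only on completed writes and skip the .git directory, block_for_change might look like this (a sketch; check that your inotifywait version supports these flags):
inotifywait --recursive --event close_write --exclude '\.git' js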
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
    --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
    -e close_write /tmp/test |
while read -r date time dir file; do
    changed_abs=${dir}${file}
    changed_rel=${changed_abs#"$cwd"/}
    rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
        username@example.com:/backup/root/dir && \
    echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get the file's last status-change time (%Z) and runs a command whenever that time changes; use %Y instead if you want the modification time.
#!/bin/bash
while true
do
    ATIME=$(stat -c %Z /path/to/the/file.txt)
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo "RUN COMMAND"
        LTIME=$ATIME
    fi
    sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"        # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time $file | awk '{ print $7 }')" # Get latest save time
while true
do
    t2="$(ls --full-time $file | awk '{ print $7 }')" # Compare to new save time
    if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi # If different, run command
    sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
    if [ -z "$1" -o -z "$2" ]; then
        echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
    elif ! [ -r "$1" ]; then
        echo "Can't react to $1, permission denied"
    else
        TARGET="$1"; shift
        ACTION="$@"
        while sleep 1; do
            ATIME=$(stat -c %Z "$TARGET")
            if [[ "$ATIME" != "${LTIME:-}" ]]; then
                LTIME=$ATIME
                $ACTION
            fi
        done
    fi
}
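After reloading ~/.bashrc, usage might look like this (the file and command are hypothetical):
react ./styles.css make css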
Quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if [ $hash != $old_hash ]
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a common example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
    if [[ "$file" =~ .*rst$ ]]; then # if the suffix is '.rst'
        echo ${path}${file} ': '${action} # execute your command
        echo 'make html'
        make html
    fi
done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (shared folders on VirtualBox using Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu (VirtualBox), for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
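A possible macOS workaround (an untested sketch, substituting BSD stat's -f format for GNU find's -printf):
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m" {} + | sort -rn | head -n1)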

move or copy a file if that file exists?

I am trying to run a command
mv /var/www/my_folder/reports.html /tmp/
It runs properly, but I want to add a condition so that the command runs only if the file exists. Is there anything like that?
I can use a shell script instead. For the shell script I tried the following:
if [ -e /var/www/my_folder/reports.html ]
then
    mv /var/www/my_folder/reports.html /tmp/
fi
But I need a single command. Can someone help me with this?
To move the file /var/www/my_folder/reports.html only if it exists and is a regular file:
[ -f "/var/www/my_folder/reports.html" ] && mv "/var/www/my_folder/reports.html" /tmp/
-f returns true if the file exists and is a regular file.
If the file exists, move it; otherwise print a message to standard error:
test -e /var/www/my_folder/reports.html && mv /var/www/my_folder/reports.html /tmp/ || echo "not existing the file" >&2
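One caveat worth noting (my addition, not part of the original answer): with the A && B || C form, the echo also fires if the mv itself fails, not only when the file is missing. If that matters, an if statement works as a one-liner too:
if [ -e /var/www/my_folder/reports.html ]; then mv /var/www/my_folder/reports.html /tmp/; else echo "file does not exist" >&2; fi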
You can do it simply in a shell script:
#!/bin/bash
# Check for the file
ls /var/www/my_folder/ | grep reports.html > /dev/null
# Check the exit status of the previous command
if [ $? -eq 0 ]
then
    echo -e "Found file"
    mv /var/www/my_folder/reports.html /tmp/
else
    echo -e "File is not there"
fi
Hope it helps
Maybe your use case is "Create if not exist, then copy always".
then: touch myfile && cp myfile mydest/

Using inotifywait to process two files in parallel

I am using:
inotifywait -m -q -e close_write --format %f . | while IFS= read -r file; do
    cp -p "$file" /path/to/other/directory
done
to monitor a folder for file completion, then move each file to another folder.
Files are made in pairs but at separate times, e.g. File1_001.txt is made at 3pm and File1_002.txt is made at 9pm. I want to monitor for the completion of BOTH files, then launch a script:
script.sh File1_001.txt File1_002.txt
So I need another inotifywait command, or a different utility, that can identify that both files are present and complete, and then start the script.
Does anyone know how to solve this problem?
I found a Linux box with inotifywait installed on it, so now I understand what it does and how it works. :)
Is this what you need?
#!/bin/bash
if [ "$1" = "-v" ]; then
    Verbose=true
    shift
else
    Verbose=false
fi
file1="$1"
file2="$2"
$Verbose && printf 'Waiting for %s and %s.\n' "$file1" "$file2"
got1=false
got2=false
while read thisfile; do
    $Verbose && printf ">> $thisfile"
    case "$thisfile" in
        $file1) got1=true; $Verbose && printf "... it's a match!" ;;
        $file2) got2=true; $Verbose && printf "... it's a match!" ;;
    esac
    $Verbose && printf '\n'
    if $got1 && $got2; then
        $Verbose && printf 'Saw both files.\n'
        break
    fi
done < <(inotifywait -m -q -e close_write --format %f .)
This runs a single inotifywait but parses its output in a loop that exits when both files on the command line ($1 and $2) are seen to have been updated.
Note that if one file is closed and then later is reopened while the second file is closed, this script obviously will not detect the open file. But that may not be a concern in your use case.
Note that there are many ways of building a solution -- I've shown you only one.
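Assuming the script above is saved as wait-both.sh (a hypothetical name), usage for the original problem might be:
./wait-both.sh -v File1_001.txt File1_002.txt && script.sh File1_001.txt File1_002.txt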

linux zip and exclude dir via bash/shell script

I am trying to write a bash/shell script to zip up a specific folder and ignore certain sub-dirs in that folder.
The folder I am trying to zip is "sync_test5".
My bash script generates an ignore list (based on $SYNC_WEB_ROOT_IGNORE_DIR) and calls zip like this:
#!/bin/bash
SYNC_WEB_ROOT_BASE_DIR="/home/www-data/public_html"
SYNC_WEB_ROOT_BACKUP_DIR="sync_test5"
SYNC_WEB_ROOT_IGNORE_DIR="dir_to_ignore dir2_to_ignore"
ignorelist=""
if [ "$SYNC_WEB_ROOT_IGNORE_DIR" != "" ];
then
    for ignoredir in $SYNC_WEB_ROOT_IGNORE_DIR
    do
        ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/**\*"
    done
fi
FILE="$SYNC_BACKUP_DIR/$DATETIMENOW.website.zip"
cd $SYNC_WEB_ROOT_BASE_DIR;
zip -r $FILE $SYNC_WEB_ROOT_BACKUP_DIR -x $ignorelist >/dev/null
echo "Done"
Now, this script runs without error; however, it is not excluding the dirs I've specified.
So, I had the shell script output the command it tried to run, which was:
zip -r 12-08-2014_072810.website.zip sync_test5 -x sync_test5/dir_to_ignore/**\* sync_test5/dir2_to_ignore/**\*
If I run the above command directly in PuTTY, it works.
So why doesn't the exclude work as intended from my shell script? The command being executed is identical (in the script and in PuTTY directly).
Because backslash escapes inside a variable are not re-evaluated after word splitting; the backslash remains a literal character.
If you have a='123\4', echo $a would give
123\4
But if you do it directly like echo 123\4, you'd get
1234
Clearly the arguments you pass via the variable and the ones you type directly are different.
You probably just meant not to escape the pattern with a backslash:
ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/***"
By the way, what actually works is an unevaluated glob pattern:
zip -r 12-08-2014_072810.website.zip sync_test5 -x 'sync_test5/dir_to_ignore/***' 'sync_test5/dir2_to_ignore/***'
You can verify this with
echo zip -r 12-08-2014_072810.website.zip sync_test5 -x sync_test5/dir_to_ignore/**\* sync_test5/dir2_to_ignore/**\*
And this is my suggestion:
#!/bin/bash
SYNC_WEB_ROOT_BASE_DIR="/home/www-data/public_html"
SYNC_WEB_ROOT_BACKUP_DIR="sync_test5"
SYNC_WEB_ROOT_IGNORE_DIR=("dir_to_ignore" "dir2_to_ignore")
IGNORE_LIST=()
if [[ -n $SYNC_WEB_ROOT_IGNORE_DIR ]]; then
    for IGNORE_DIR in "${SYNC_WEB_ROOT_IGNORE_DIR[@]}"; do
        IGNORE_LIST+=("$SYNC_WEB_ROOT_BACKUP_DIR/$IGNORE_DIR/***") ## "$SYNC_WEB_ROOT_BACKUP_DIR/$IGNORE_DIR/*" perhaps is enough?
    done
fi
FILE="$SYNC_BACKUP_DIR/$DATETIMENOW.website.zip" ## Where is $SYNC_BACKUP_DIR set?
cd "$SYNC_WEB_ROOT_BASE_DIR"
zip -r "$FILE" "$SYNC_WEB_ROOT_BACKUP_DIR" -x "${IGNORE_LIST[@]}" >/dev/null
echo "Done"
This is what I ended up with:
#!/bin/bash
# This script zips a directory, excluding specified files, types and subdirectories.
# While zipping the directory it excludes hidden directories and certain file types.
[[ "`/usr/bin/tty`" == "not a tty" ]] && . ~/.bash_profile
DIRECTORY=$(cd `dirname $0` && pwd)
if [[ -z $1 ]]; then
    echo "Usage: managed_directory_compressor /your-directory/ zip-file-name"
else
    DIRECTORY_TO_COMPRESS=${1%/}
    ZIPPED_FILE="$2.zip"
    COMPRESS_IGNORE_FILE=("\.git" "*.zip" "*.csv" "*.json" "gulpfile.js" "*.rb" "*.bak" "*.swp" "*.back" "*.merge" "*.txt" "*.sh" "bower_components" "node_modules")
    COMPRESS_IGNORE_DIR=("bower_components" "node_modules")
    IGNORE_LIST=("*/\.*" "\.*" "\/\.*")
    if [[ -n $COMPRESS_IGNORE_FILE ]]; then
        for IGNORE_FILES in "${COMPRESS_IGNORE_FILE[@]}"; do
            IGNORE_LIST+=("$DIRECTORY_TO_COMPRESS/$IGNORE_FILES/*")
        done
        for IGNORE_DIR in "${COMPRESS_IGNORE_DIR[@]}"; do
            IGNORE_LIST+=("$DIRECTORY_TO_COMPRESS/$IGNORE_DIR/")
        done
    fi
    zip -r "$ZIPPED_FILE" "$DIRECTORY_TO_COMPRESS" -x "${IGNORE_LIST[@]}" # >/dev/null
    # echo zip -r "$ZIPPED_FILE" "$DIRECTORY_TO_COMPRESS" -x "${IGNORE_LIST[@]}" # >/dev/null
    echo $DIRECTORY_TO_COMPRESS "compressed as" $ZIPPED_FILE.
fi
After some trial and error, I managed to fix this problem by changing this line:
ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/**\*"
to:
ignorelist="$ignorelist $SYNC_WEB_ROOT_BACKUP_DIR/$ignoredir/***"
Not sure why this worked, but it does :)
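A likely explanation, following the accepted answer above (my inference): inside double quotes the backslash in **\* is preserved, so the unquoted expansion of $ignorelist handed zip a pattern containing a literal backslash, whereas *** reaches zip as the intended glob. The difference is visible with:
printf '%s\n' "**\*" "***"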

Bash script: Syntax error in conditional expression

I'm new around the neighborhood and stuck with a syntax error. Please take a look and maybe someone can assist. I'm trying to run the following script:
#!/bin/bash
main () {
    dpkg -query -s $1 &> /tmp/pkg_verify
    if grep -q 'not installed' /tmp/verify
    then
        echo -e "\e[31m$1 is not installed. installing..\e[0m"
        apt-get install $1
        echo -e "\e[31m$1 is not installed and ready to use\e[0m"
    else
        echo -e "\e[31m$1 is already installed\e[0m"
    fi
    rm -f /tmp/pkg_verify
    for test in $#; do main $test; shift; done
    echo -e "\e[31mDone\e[0m"
}
for test in $#; do main $test; shift; done
echo -e "\e[31mDone\e[0m"
But when I try to execute it I face an endless loop:
grep: /tmp/verify: No such file or directory
16 is already installed
I truly tried to find the answer and tried changing the if to a couple of different forms, but without any success. Does anyone have an idea why that is? What should I change so that the script can run?
Thanks in advance to all the helpers.
You have two elses following each other. That can't work; it's either elif condition or just a single else.
The infinite loop is caused by main calling itself recursively.
And third, it's probably a bug to shift when iterating with for i in "$@".
To debug a script (free of syntax errors) use set -x near the beginning.
Replace this line:
if [[ -z `grep 'not installed' /tmp/pkg_verify` ]]
with this if condition:
if grep -q 'not installed' /tmp/pkg_verify
Full Script:
main () {
    dpkg-query -s "$1" &> /tmp/pkg_verify
    if grep -q 'not installed' /tmp/pkg_verify
    then
        echo -e "\e[31m$1 is not installed. installing..\e[0m"
        apt-get install "$1"
        echo -e "\e[31m$1 is now installed and ready to use\e[0m"
    else
        echo -e "\e[31m$1 is already installed\e[0m"
    fi
}
rm -f /tmp/pkg_verify
for test in "$@"; do main "$test"; done
echo -e "\e[31mDone\e[0m"
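As a side note (my addition, not part of the answer), the temp file can be avoided entirely by piping dpkg-query's output, including stderr, straight into grep; a sketch:
main () {
    if dpkg-query -s "$1" 2>&1 | grep -q 'not installed'; then
        echo -e "\e[31m$1 is not installed. installing..\e[0m"
        apt-get install "$1"
    else
        echo -e "\e[31m$1 is already installed\e[0m"
    fi
}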
