shell script error - linux

I have the following line in a shell script:
if [ -f /etc/init.d/tomcat6 && ps -C java|grep -qs 'java' ]; then
which throws up the following error when I try to run it:
line 12: [: missing `]'
I have a feeling this is an encoding issue, as I've been editing the file in Notepad++ on a Windows XP PC. I've made sure the encoding is set to UTF-8 without BOM and that all the line endings are Linux style, yet I still get this error.
Can anyone help?
Thanks

Try
if [ -f /etc/init.d/tomcat6 ] && ps -C java | grep -qs 'java'; then
...
fi
[ is basically an alias for the test command, and test doesn't know what to do with an argument like ps. Alternatively, you may use test explicitly (just to clarify the syntax):
if test -f /etc/init.d/tomcat6 && ps -C java | grep -qs 'java'; then
...
fi
If you use [ instead of test, you are forced to end the expression with ].

The && ends your [ command.
if [ -f /etc/init.d/tomcat6 ] && ps -C java | grep -qs 'java'; then

Within [ ], the syntax for "and" is -a.
You also need to actually run ps -C java | grep 'java'; at the moment it is just parsed as arguments to [. Try this (grep's -q is dropped so the command substitution has output for -n to test):
if [ -f /etc/init.d/tomcat6 -a -n "$(ps -C java | grep -s 'java')" ]; then

How to develop a Condition to close program only when log file has been updated in Bash Script [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
  inotifywait --recursive \
    --event modify,move,create,delete \
    "$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
  bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
  build
done
This uses inotify-tools. Check the inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
  --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
  -e close_write /tmp/test |
while read -r date time dir file; do
  changed_abs=${dir}${file}
  changed_rel=${changed_abs#"$cwd"/}
  rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
    username@example.com:/backup/root/dir && \
  echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to read the file's status-change time and runs a command whenever that time differs from the last one seen (i.e. whenever the file changes).
#!/bin/bash
while true
do
  ATIME=$(stat -c %Z /path/to/the/file.txt)
  if [[ "$ATIME" != "$LTIME" ]]
  then
    echo "RUN COMMAND"
    LTIME=$ATIME
  fi
  sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
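For instance, a rough sketch of that idea for an append-only file such as a log (the path and the command are placeholders):
tail -n0 -F /path/to/file.log | while read -r line; do
  echo "file changed: $line"   # replace with the command you want to run
done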
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"        # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time "$file" | awk '{ print $7 }')"   # Get latest save time
while true
do
  t2="$(ls --full-time "$file" | awk '{ print $7 }')" # Compare to new save time
  if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi  # If different, run command
  sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT "$file" | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
  if [ -z "$1" -o -z "$2" ]; then
    echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
  elif ! [ -r "$1" ]; then
    echo "Can't react to $1, permission denied"
  else
    TARGET="$1"; shift
    ACTION="$@"
    while sleep 1; do
      ATIME=$(stat -c %Z "$TARGET")
      if [[ "$ATIME" != "${LTIME:-}" ]]; then
        LTIME=$ATIME
        $ACTION
      fi
    done
  fi
}
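For example, once the function is defined in your shell, a call might look like this (the file name is only an illustration):
react ./notes.txt echo notes.txt was saved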
Quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if [ "$hash" != "$old_hash" ]
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 if you are on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
  while read -r path action file; do
    if [[ "$file" =~ .*rst$ ]]; then        # if the suffix is '.rst'
      echo "${path}${file} : ${action}"     # execute your command
      echo 'make html'
      make html
    fi
  done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ $t_ref != $t_curr ]; then t_ref=$t_curr; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (e.g. shared folders on VirtualBox with Vagrant), so you can edit in a text editor on your MacBook and run the tests on Ubuntu in the VM, for example.
Warning
The -printf option of find works well on Ubuntu, but does not work on macOS.
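As a rough sketch (not from the original answer), on macOS you could approximate the -printf step with BSD stat instead:
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m" {} + | sort -rn | head -n1)   # newest mtime in epoch seconds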

bash script porting issue related to script program

I am trying to port an existing bash script to Solaris and FreeBSD. It works fine on Fedora and Ubuntu.
This bash script uses the following set of commands to flush the output to the temporary file.
file=$(mktemp)
# record test_program output into a temp file
script -qfc "test_program arg1" "$file" </dev/null &
The script program does not have the -q, -f, and -c options on FreeBSD and Solaris; there it only has the -a option. I have tried the following so far:
1) Updated to the latest version of bash. This did not help.
2) Tried to find out where the source code of the script program lives; I could not find it either.
Can somebody help me out here?
script is a standalone program, not part of the shell, and as you noticed, only the -a flag is available in all variants. The FreeBSD version supports something similar to -f (-F <file>) and doesn't need -c.
Here's an ugly but more portable solution:
buildsh() {
cat <<-!
#!/bin/sh
SHELL="$SHELL" exec \\
!
# Build quoted argument list
while [ $# != 0 ]; do echo "$1"; shift; done |
sed 's/'\''/'\'\\\\\'\''/g;s/^/'\''/;s/$/'\''/;!$s/$/ \\/'
}
# Build a shell script with the arguments and run it within `script`
record() {
  local F t="$(mktemp)" f="$1"
  shift
  case "$(uname -s)" in
    Linux)   F=-f ;;
    FreeBSD) F=-F ;;
  esac
  buildsh "$@" > "$t" &&
    chmod 500 "$t" &&
    SHELL="$t" script $F "$f" /dev/null
  rm -f "$t"
  sed -i '1d;$d' "$f" # Emulate -q
}
file=$(mktemp)
# record test_program output into a temp file
record "$file" test_program arg1 </dev/null &

Bash Script : Unwanted Output

I have this simple bash script. I run the ns simulator on each file passed as an argument, where the last argument is a text string to search for.
#!/bin/bash
nsloc="/home/ashish/ns-allinone-2.35/ns-2.35/ns"
temp="temp12345ashish.temp"
j=1
for file in "$@"
do
  if [ $j -lt $# ]
  then
    let j=$j+1
    `$nsloc $file > $temp 2>&1`
    if grep -l ${BASH_ARGV[0]} $temp
    then
      echo "$file Successful"
    fi
  fi
done
I expected:
file1.tcl Successful
I am getting:
temp12345ashish.temp
file1.tcl Successful
When I run the simulator command myself in the terminal, I do not see the file name to which output is directed.
I cannot work out where this first line of output is coming from.
Please explain it.
Thanks in advance.
See man grep, and see specifically the explanation of the -l option.
In your script (above), you are using -l, so grep is telling you (as instructed) the filename where the match occurred.
If you don't want to see the filename, don't use -l, or use -q with it also. Eg:
grep -ql ${BASH_ARGV[0]} $temp
Just silence the grep:
if grep -l ${BASH_ARGV[0]} $temp &> /dev/null

bash - errors trying to pipe commands to run to separate function

I'm trying to get this function, which is meant to make it easy to parallelize my bash scripts, working. The idea is simple: instead of running each command sequentially, I pipe the commands I want to run to this function; it does a while read line, runs the jobs in the background for me, and takes care of the logistics. It doesn't work, though. I added set -x near where the commands are executed, and it looks like I'm getting weird quotes around the stuff I want executed. What should I do?
runParallel () {
  while read line
  do
    while [ "`jobs | wc -l`" -eq 8 ]
    do
      sleep 2
    done
    {
      set -x
      ${line}
      set +x
    } &
  done
  while [ "`jobs | wc -l`" -gt 0 ]
  do
    sleep 1
    jobs >/dev/null 2>/dev/null
    echo sleeping
  done
}
for H in `ypcat hosts | grep fmez | grep -v mgmt | cut -d' ' -f2 | sort -u`
do
echo 'ping -q -c3 $H 2>/dev/null 1>/dev/null && echo $H - UP || echo $H - DOWN'
done | runParallel
When I run it, I get output like the following:
> ./myscript.sh
+ ping -q -c3 '$H' '2>/dev/null' '1>/dev/null' '&&' echo '$H' - UP '||' echo '$H' - DOWN
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
+ set +x
sleeping
>
The quotes in the set -x output are not the problem; at most they are another symptom of it. The main problem is that ${line} is not the same as eval ${line}.
When a variable is expanded, the resulting words are not treated as shell syntax (redirections, &&, and so on). This is expected; it means that, e.g.,
A="some text containing > ; && and other weird stuff"
echo $A
does not shout about invalid syntax but prints the variable value.
But in your function it means that all the words in ${line}, including 2>/dev/null and the like, are passed as arguments to ping, which the set -x output nicely shows, and so ping complains.
If you want to execute complicated command lines with redirections and conditionals from variables, you will have to use eval.
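Applied to the function above, that part of the loop would then look something like this:
{
  set -x
  eval "${line}"
  set +x
} &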
If I'm understanding this correctly, you probably don't want single quotes in your echo command. Single quotes produce literal strings and don't expand your bash variable $H.
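For instance, with double quotes the hostname is expanded before the line is piped to runParallel:
echo "ping -q -c3 $H 2>/dev/null 1>/dev/null && echo $H - UP || echo $H - DOWN"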
Like many users of GNU Parallel you seem to have written your own parallelizer.
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
cat hosts | parallel -j8 'ping -q -c3 {} 2>/dev/null 1>/dev/null && echo {} - UP || echo {} - DOWN'
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Put your command in an array.
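A minimal sketch of that idea (the host list is a placeholder): keep the command words in a bash array and expand it with "${cmd[@]}", so the redirections and && stay shell syntax instead of being passed to ping as arguments:
hosts=(host1 host2 host3)   # placeholder host list
for H in "${hosts[@]}"; do
  cmd=(ping -q -c3 "$H")    # one array element per word of the command
  "${cmd[@]}" >/dev/null 2>&1 && echo "$H - UP" || echo "$H - DOWN" &
done
wait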

Resources