I am learning shell programming from the very basics, using the book Beginning Linux Programming (4th Edition). I am confused by this script with an until clause:
#!/bin/bash
until who | grep "$1" > /dev/null
do
sleep 60
done
# Now ring the bell and announce the unexpected user.
echo -e '\a'
echo "***** $1 has just logged in *****"
exit 0
My question is: what is who | grep "$1" > /dev/null used for here? Why redirect the grep output to /dev/null?
The until loop tests a condition, as you mentioned, and runs the do...done block until that condition becomes true. In other words, it executes the block only while the condition is FALSE, and stops once it turns true. The script you are testing is useful for catching when a user you pass as a parameter logs in (hence the grep "$1", $1 being a positional parameter). It sleeps for a minute (sleep 60) until that user logs in to the system, and then it exits the loop and does all the '$1 has just logged in' stuff. The redirection of the grep output to /dev/null is there so the output of the grep command is not displayed (you could have used grep -q "$1" to achieve the same effect, as shown in the sketch below).
Hope that clears up your doubts.
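For reference, here is the same watcher rewritten with the grep -q variant mentioned above, as a minimal sketch:
#!/bin/bash
# grep -q prints nothing and exits 0 on the first match,
# so the redirection to /dev/null is no longer needed.
until who | grep -q "$1"
do
    sleep 60
done
echo -e '\a'
echo "***** $1 has just logged in *****"
exit 0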
while and until (and, admittedly, if) look at the exit code of the test, not at any text that may or may not be generated on stdout (or stderr).
I suspect the reason redirection to /dev/null has been used is that the command only generates output when there is a match; most of the time there is none, and when there finally is one, you're not interested in seeing it anyway.
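You can see this in an interactive shell; assuming a user alice who is not currently logged in:
who | grep alice > /dev/null
echo $?    # prints 1: no match, so the until loop keeps going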
The company I work for has a crontab set to run a given shell script every few minutes to perform certain complex operations without user intervention. This script basically executes multiple Perl scripts in a sequence, checking first that they are not already running, using the following structure as many times as there are customers:
for i in `seq 1 20`;
do
ps ax | grep ourFile10000008.p | grep pl 2>> /dev/null >> $LOG
if [ $? -eq 1 ] ; then
cd /path/to/the/script
perl ourFile10000008.pl 10000008 & 2>> $LOG
fi
ps ax | grep ourFile10000009.p | grep pl 2>> /dev/null >> $LOG
if [ $? -eq 1 ] ; then
cd /path/to/the/script
perl ourFile10000009.pl 10000009 & 2>> $LOG
fi
# (and so on, and so forth...)
done
This kind of works, except for the fact that there are now dozens of "ourFile" Perl scripts in our /path/to/the/script folder, and they are exact copies of each other! Every time a new customer comes online, we need to create a new replica, which makes maintaining this structure very hard, to say the least.
I'm trying to make this structure run on a single file (named here as [theOneFile.pl]) that's another copy of those scripts but is called every time with a new argument. This works, but now I have to make sure I'm only running this file once per argument passed.
After some research, and thanks to this answer, I have successfully determined the argument behind a running [theOneFile.pl] through pgrep -af theOneFile.pl | tr '\000' ' ' | awk '{print $4}' >> $LOG. However, this gives me a list of results to contend with. To keep today's logic as intact as possible, I'm trying to determine only whether one of these processes is running with one specific argument at a given time (e.g. theOneFile.pl 10000009), but I'm not sure how to do so. Any ideas?
pgrep -f (which you are using) matches the pattern against the whole command line of a process, not just the process name. That said, you can use:
arg="foo"
pgrep -f "theOneFile.pl.*${arg}"
Well, the pgrep approach is prone to race conditions. Better would be to change the script itself to use an exclusive lock, one per argument.
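For example, here is a minimal sketch of such a per-argument lock using flock(1); the wrapper script and the lock file path are hypothetical:
#!/bin/bash
# Hypothetical wrapper around theOneFile.pl: one lock file per argument.
arg="$1"
lock="/tmp/theOneFile.${arg}.lock"
exec 9> "$lock"
# flock -n fails immediately if another instance already holds the lock,
# so a second copy started with the same argument exits right away.
if ! flock -n 9
then
    exit 1
fi
perl /path/to/the/script/theOneFile.pl "$arg"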
Please bear with me if my terminology or syntax is less than stellar (still learning). I currently have a simple bash script that checks the arguments of the command and outputs file names with matching text. This part of my script works correctly via a grep command piped to xargs for proper formatting.
When running the script, I run through a simple loop to check if the value is null and then move to running my variable/search if not.
My question is: Is it possible to have this script output via stdout AND also save a new file each time it is run with the user input and date/time? (but not overwrite) EX: report-bob-0729161500.rpt
I saw some other suggestions to use tee with the command, but I was trying to get it to work within the script. Similarly, another suggestion was to use exec > >(tee -i logfile.txt), but I am unsure how to format this to include the date/time and the $1 input in a new file each time the script is executed.
Any help or suggested resources?
Thank you.
SEARCH=`[search_variable]`
if [ -z "$SEARCH" ]
then
echo "$1 not found."
else
echo -e "REPORT LISTING\n\n"
echo "$SEARCH"
fi
EDIT: I did try simply piping the echo statements to the tee command, which does work. However, I am still curious if anyone has other suggestions to accomplish this same task via alternative methods. Thank you.
With echo statements piped to tee:
SEARCH=`[search_variable]`
DATE=`date +"%m%d%y%H%M"`
if [ -z "$SEARCH" ]
then
echo "$1 not found."
else
echo -e "REPORT LISTING\n\n" | tee tps-list-$1-$DATE.rpt
echo "$SEARCH" | tee tps-list-$1-$DATE.rpt
fi
If you want to do it within the script, why not just write to both standard output and the file (using append where appropriate)? It's a bit more writing, but it gives you complete control.
Leon
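For what it's worth, here is a minimal sketch of the exec > >(tee ...) approach mentioned in the question, with the date/time and the $1 input folded into the file name (bash only, since process substitution is not POSIX):
#!/bin/bash
report="report-$1-$(date +%m%d%y%H%M).rpt"
# From here on, everything written to stdout also lands in $report;
# tee -a appends rather than overwriting an existing file.
exec > >(tee -a "$report")
# ...the rest of the script (the existing echo statements) goes here unchanged.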
Could someone help me understand the condition ls /etc/*release 1>/dev/null 2>&1 that's contained in this code:
if ls /etc/*release 1>/dev/null 2>&1; then
echo "<h2>System release info</h2>"
echo "<pre>"
for i in /etc/*release; do
# Since we can't be sure of the
# length of the file, only
# display the first line.
head -n 1 $i
done
uname -orp
echo "</pre>"
fi
I pretty much don't understand any of that line, but specifically what I wanted to know is:
Why does it not have to use the 'test' syntax, i.e. [ expression ]?
The spacing in the condition also confuses me; is 1>/dev/null a variable in the ls statement?
What is 2>&1?
I understand the purpose of this statement, which is: if there exists a file with release in its name under the /etc/ directory, the statement will continue. I just don't understand how this achieves that.
Thanks for your help
[ isn't a special character, it's a command (/bin/[ or /usr/bin/[, usually a link to test). That means
if [ ...
if test ...
are the same. For this to work, test ignores ] as its last argument when it is invoked as [.
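You can check this yourself:
test -e /etc; echo $?    # 0
[ -e /etc ]; echo $?     # 0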
if simply responds to the exit code of the command it invokes. An exit code of 0 means success or "true".
1>/dev/null 2>&1 redirects stdout (1) to the device /dev/null and then stderr (2) to stdout, which means the command can't display any output or errors on the terminal.
Since the target of the second redirection is a file descriptor rather than a normal file or device, you have to use >& for it.
At first glance, one would think that if [ -e /etc/*release ] would be a better solution but test -e doesn't work with patterns.
The test program just evaluates its arguments and returns a code of 0 or 1 to tell whether the expression was true or not.
But you can use any shell command or function with if. It will run the then part if the return code ($?) was 0.
So, here, we look at whether ls returns 0 (a file matched) or not.
So, in the end, it's roughly equivalent to writing if [ -e /etc/*release ]; then, which looks more shell-like (though, as noted above, test -e breaks if the pattern matches more than one file).
The last two parts, 1>/dev/null and 2>&1, are just there to avoid displaying the output of the ls:
1>/dev/null redirects stdout to /dev/null, so the standard output is not shown.
2>&1 redirects stderr to stdout. Here, stdout is already going to /dev/null, so everything ends up in /dev/null.
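Note that the order of the two redirections matters, because each one takes effect at the point where it appears:
ls /etc/*release > /dev/null 2>&1    # both streams go to /dev/null
ls /etc/*release 2>&1 > /dev/null    # stderr still reaches the terminal:
                                     # it was duplicated before stdout moved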
I read the answer for this issue from this link on Stackoverflow.com, but I am so new to writing shell scripts that I did something wrong. The following are my scripts:
testscript:
#!/bin/csh -f
pid=$(ps -opid= -C csh testscript1)
while [ -d /proc/$pid ] ; do
sleep 1
done && csh testscript2
exit
testscript1:
#!/bin/csh -f
/usr/bin/firefox
exit
testscript2:
#!/bin/csh -f
echo Done
exit
The purpose is for testscript to call testscript1 first; once testscript1 finishes (which means the firefox launched in testscript1 is closed), testscript will call testscript2. However, I got this result after running testscript:
$ csh testscript
Illegal variable name.
Please help me with this issue. Thanks in advance.
I believe this line is not CSH:
pid=$(ps -opid= -C csh testscript1)
In general in csh you define variables like this:
set pid=...
I am not sure what the $() syntax is; perhaps backticks would work as a replacement:
set pid=`ps -opid= -C csh testscript1`
Perhaps you didn't notice that the scripts you found were written for bash, not csh, but
you're trying to process them with the csh interpreter.
It looks like you've misunderstood what the original code was trying to do -- it was
intended to monitor an already-existing process, by looking up its process id using the process name.
You seem to be trying to start the first process from inside the ps command. But
in that case, there's no need for you to do anything so complicated -- all you need
is:
#!/bin/csh
csh testscript1
csh testscript2
Unless you go out of your way to run one of the scripts in the background,
the second script will not run until the first script is finished.
Although this has nothing to do with your problem, csh is more oriented toward
interactive use; for script writing, it's considered a poor choice, so you might be
better off learning bash instead.
Try this: the script below will check for testscript1's pid and, if it is not found, execute testscript2.
# Grab the pid of testscript1, filtering out the grep itself.
sp=$(ps -ef | grep testscript1 | grep -v grep | awk '{print $2}')
# If no /proc entry matches that pid, the process has exited, so start testscript2.
/bin/ls -l /proc/ | grep "$sp" > /dev/null 2>&1 && sleep 0 || /bin/csh testscript2
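A somewhat simpler variant of the same idea, as a sketch (assuming pgrep is available):
#!/bin/sh
# pgrep -f exits 0 when a process whose command line matches is found.
if ! pgrep -f testscript1 > /dev/null
then
    /bin/csh testscript2
fi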
Since cruise control is full of bugs that have wasted my entire week, I have decided the existing shell scripts I have are simpler and thus better.
Here is what I have so far
svn update /var/www/k12/
#svn log --revision "HEAD" /var/www/code/ | head -2 | tail -1 | awk '{print $1}' > /var/www/path/version.txt
# upload the files
rsync -ar --verbose --stats --progress --delete --exclude=*.svn /var/www/code/ example.com:/home/path
# bring database up to date
ssh example.com 'php /path/tasks/dbrefactor.php'
# notify me
ssh example.com 'php /path/tasks/build.php'
Only thing is the other day I changed the paths and forgot to update the rsync call. As a result the "notify me" step ran several times while I was figuring stuff out.
I know that in Linux you can do command1 && command2, and if command1 "fails" command2 will not run, but how can I observe the failure/success exit codes for debugging purposes? Some of the scripts I wrote myself, and I'm sure I will need to do something special.
The best option, especially for unattended scripts, is to set the -e shell option:
#!/bin/sh -e
or
set -e
This will cause the shell to stop executing if any (untested) command exits with a nonzero error code.
-e Exit immediately if a simple command (see SHELL GRAMMAR
above) exits with a non-zero status. The shell does not
exit if the command that fails is part of an until or
while loop, part of an if statement, part of a && or ||
list, or if the command's return value is being inverted
via !. A trap on ERR, if set, is executed before the
shell exits.
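Applied to the script in the question, this would look like the sketch below; with -e, a failing step stops the run before the notify steps:
#!/bin/sh -e
svn update /var/www/k12/
# If this rsync fails (e.g. after a path change), the script exits
# here and none of the later steps run.
rsync -ar --verbose --stats --progress --delete --exclude=*.svn /var/www/code/ example.com:/home/path
ssh example.com 'php /path/tasks/dbrefactor.php'
ssh example.com 'php /path/tasks/build.php'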
The exit code of the previous process is available in the $? variable right after its execution. Usually (it's not required, but it's the convention everyone follows) the exit code of a successful command will be 0, and any other value means an error.
Remember the caveats! One of them is that after these commands:
svn log --revision "HEAD" /var/www/code/ | head -2 | tail -1 | awk '{print $1}'
echo "$?"
$? would most likely be zero, because it holds the return code of awk, the last command in the pipeline. To avoid this, set the pipefail option somewhere above that code:
set -o pipefail
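A quick demonstration:
set -o pipefail
false | awk '{ print }'
echo "$?"    # prints 1: the failure of false is no longer masked by awk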
The return value of the last-run command is stored in the variable $?. You can use that to determine which command to run next. Overview of special variables.
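For example, here is a sketch using the rsync step from the question, capturing the code right after the command runs:
rsync -ar --delete --exclude=*.svn /var/www/code/ example.com:/home/path
status=$?
if [ "$status" -ne 0 ]
then
    echo "rsync failed with exit code $status" >&2
    exit "$status"
fi
# Safe to continue: the deploy step succeeded.
ssh example.com 'php /path/tasks/build.php'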
I think $? contains the last exit code:
if [[ $? -eq 0 ]]
then
    # notify me
    ssh example.com 'php /path/tasks/build.php'
fi
(The test needs -eq 0 rather than -z: $? always expands to a number, so -z would never be true.)
I would suggest exiting with a nonzero status at the points where failure is expected, and checking it before processing any further:
if [ $? -ne 0 ]
then
    # there was a failure; handle it here
fi
$? will hold a nonzero value whenever the last command did not execute successfully.