Searching for a substring in a bash script will not work - linux

I have been writing a bash script, called from my .bashrc file, to print the whatis result for a random command in my /usr/bin folder, and I want to exclude commands whose result is "nothing appropriate". Whether I use grep, wc, expr, or ==, nothing seems to work. I have pretty much used every example here, and here, with no progress. This is what I have so far, but it fails to do what I want when it finds something that contains "nothing appropriate." If anyone could figure out how to get it to work, or suggest a good solution for this situation, I would be grateful.
#! /bin/bash
echo "Did you know that:";
while :
do
    RESULT=$(whatis $(ls /usr/bin | shuf -n 1))
    if [[ $RESULT != *"nothing appropriate"* ]]
    then
        echo $RESULT
        break
    fi
done

whatis prints the "nothing appropriate" message on the standard error stream. This stream is not captured by $( ). This is the reason for your issue.
This is a way to fix it:
#! /bin/bash
echo "Did you know that:";
while :
do
    RESULT=$(whatis $(ls /usr/bin | shuf -n 1) 2>&1 | cat - )
    if [[ $RESULT != *"nothing appropriate"* ]]
    then
        echo $RESULT
        break
    fi
done
The 2>&1 redirection is what does the trick; piping through cat - is not strictly required, but it is harmless.
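Alternatively, a minimal sketch that skips the string match entirely and relies on whatis's exit status (assuming a whatis implementation, such as man-db, that exits non-zero when it finds nothing):
#!/bin/bash
# sketch: rely on whatis's exit status instead of matching its error text
echo "Did you know that:"
while :
do
    CMD=$(ls /usr/bin | shuf -n 1)
    if RESULT=$(whatis "$CMD" 2>/dev/null)   # the assignment carries whatis's exit status
    then
        echo "$RESULT"
        break
    fi
done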

Related

Unable to array values outside of function in shell script [duplicate]

Please explain to me why the very last echo statement is blank. I expect XCODE to be incremented in the while loop to a value of 1:
#!/bin/bash
OUTPUT="name1 ip ip status" # normally output of another command with multi-line output
if [ -z "$OUTPUT" ]
then
    echo "Status WARN: No messages from SMcli"
    exit $STATE_WARNING
else
    echo "$OUTPUT" | while read NAME IP1 IP2 STATUS
    do
        if [ "$STATUS" != "Optimal" ]
        then
            echo "CRIT: $NAME - $STATUS"
            echo $((++XCODE))
        else
            echo "OK: $NAME - $STATUS"
        fi
    done
fi
echo $XCODE
I've tried using the following statement instead of the ++XCODE method
XCODE=`expr $XCODE + 1`
and it too won't print outside of the while statement. I think I'm missing something about variable scope here, but the ol' man page isn't showing it to me.
Because you're piping into the while loop, a sub-shell is created to run it. This child process has its own copy of the environment and can't pass any variables back to its parent (as with any Unix process). Therefore you'll need to restructure so that you're not piping into the loop. Alternatively, you could run it in a function, for example, and echo the value you want returned from the sub-process.
http://tldp.org/LDP/abs/html/subshells.html#SUBSHELL
The problem is that processes put together with a pipe are executed in subshells (and therefore have their own environment). Whatever happens within the while does not affect anything outside of the pipe.
Your specific example can be solved by rewriting the pipe to
while ... do ... done <<< "$OUTPUT"
or perhaps
while ... do ... done < <(echo "$OUTPUT")
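Applied to the script in the question, a minimal sketch of that here-string rewrite (so the XCODE increment survives the loop):
#!/bin/bash
OUTPUT="name1 ip ip status"
XCODE=0
# feed the loop through a here-string instead of a pipe, so it runs in the current shell
while read NAME IP1 IP2 STATUS
do
    if [ "$STATUS" != "Optimal" ]
    then
        echo "CRIT: $NAME - $STATUS"
        XCODE=$((XCODE + 1))
    else
        echo "OK: $NAME - $STATUS"
    fi
done <<< "$OUTPUT"
echo $XCODE    # now prints 1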
This should work as well (because echo and while are in the same subshell):
#!/bin/bash
cat /tmp/randomFile | (while read line
do
    LINE="$LINE $line"
done && echo $LINE )
One more option:
#!/bin/bash
cat /some/file | while read line
do
    var="abc"
    echo $var | xsel -i -p   # copy the value into the X primary selection
done
var=$(xsel -o -p)            # read the primary selection back out
echo $var
EDIT:
Here, xsel is a requirement (install it).
Alternatively, you can use xclip:
xclip -i -selection clipboard
instead of
xsel -i -p
I got around this when I was making my own little du:
ls -l | sed '/total/d ; s/ */\t/g' | cut -f 5 |
( SUM=0; while read SIZE; do SUM=$(($SUM+$SIZE)); done; echo "$(($SUM/1024/1024/1024))GB" )
The point is that I make a subshell with ( ) containing my SUM variable and the while, but I pipe into the whole ( ) instead of into the while itself, which avoids the gotcha.
 #!/bin/bash
 OUTPUT="name1 ip ip status"
+export XCODE=0;
 if [ -z "$OUTPUT" ]
 ...
         echo "CRIT: $NAME - $STATUS"
-        echo $((++XCODE))
+        export XCODE=$(( $XCODE + 1 ))
     else
 ...
 echo $XCODE
see if those changes help
Another option is to output the results into a file from the subshell and then read it in the parent shell. Something like:
#!/bin/bash
EXPORTFILE=/tmp/exportfile${RANDOM}
cat /tmp/randomFile | while read line
do
    LINE="$LINE $line"
    echo $LINE > $EXPORTFILE
done
LINE=$(cat $EXPORTFILE)
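A minor variation on the same idea, sketched with mktemp instead of $RANDOM so the scratch file is guaranteed unique and is cleaned up on exit:
#!/bin/bash
EXPORTFILE=$(mktemp)                 # unique scratch file
trap 'rm -f "$EXPORTFILE"' EXIT      # remove it when the script exits
cat /tmp/randomFile | while read -r line
do
    LINE="$LINE $line"
    echo "$LINE" > "$EXPORTFILE"
done
LINE=$(cat "$EXPORTFILE")
echo "$LINE"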

Getting out of tail -f in shell script

I can't seem to make this work. This is the script:
tail -fn0 nohup.out | while read line; do
    if [[ "${line}" =~ ".*ERIKA.*" ]]; then
        echo "match found"
        break
    fi
done
echo "Search done"
The echo "Search done" line does not run even after a match has been found.
I just want the rest of the code to run when a match has been found, but I have not managed to make that happen yet.
Sorry, I am new to log monitoring.
Is there any workaround for this?
I am going to run the script via Jenkins, so the code should run straight through and should not require any user interaction.
Please help, thanks.
You've got a couple of issues here:
tail is going to keep running until it fails to write to its output pipeline, and thus your pipeline won't complete until tail exits. It won't do that until after your script exits, AND another line (or possibly 4K if buffering, see below) is written to the log file, causing it to attempt to write to its output pipe. (re buffering: Most programs are switched to 4K buffering when writing through pipes. Unless tail explicitly sets its buffering, this would affect the above behaviour).
your regex: "${line}" =~ ".*ERIKA.*" does not match for me. However, "${line}" =~ "ERIKA" does match.
You can use tail's --pid option as a solution to the first issue. Here's an example, reworking your script to use that option:
while read line; do
    if [[ "${line}" =~ "ERIKA" ]]; then
        echo "match found"
        break
    fi
done < <(tail --pid=$$ -f /tmp/out)
echo "Search done"
Glenn Jackman's pkill solution is another approach to terminating the tail.
Perhaps consider doing this in something other than bash: perl has a nice File::Tail module that implements the tail behaviour.
There are many more questions related to this problem, you may find something you prefer in their answers:
Ending tail -f started in a shell script
Do a tail -F until matching a pattern
https://superuser.com/questions/275827/how-to-read-one-line-from-tail-f-through-a-pipeline-and-then-terminate
https://unix.stackexchange.com/questions/45941/tail-f-until-text-is-seen
https://unix.stackexchange.com/questions/12075/best-way-to-follow-a-log-and-execute-a-command-when-some-text-appears-in-the-log?rq=1
Here's one way, doesn't feel very elegant though.
tail -fn0 nohup.out |
while IFS= read -r line; do
    if [[ $line == *ERIKA* ]]; then
        echo "match found"
        pkill -P $$ tail
    fi
done
echo "Search done"
You can use awk to exit:
tail -fn0 nohup.out | awk '/ERIKA/{print "match found ", $0; exit}'
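Combining that awk exit with the --pid trick above, a minimal sketch (assuming GNU tail and bash process substitution) that lets the script continue immediately after the match:
#!/bin/bash
# awk exits on the first match; --pid=$$ makes the leftover tail go away once this script ends
awk '/ERIKA/{print "match found"; exit}' < <(tail --pid=$$ -fn0 nohup.out)
echo "Search done"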

Bash shell `if` command returns something `then` do something

I am trying to write an if/then statement where, if there is non-empty output from an ls | grep something command, I want to execute some statements. I do not know the syntax I should be using. I have tried several variations of this:
if [[ `ls | grep log ` ]]; then echo "there are files of type log";
Well, that's close, but you need to finish the if with fi.
Also, if just runs a command and executes the conditional code if the command succeeds (exits with status code 0), which grep does only if it finds at least one match. So you don't need to check the output:
if ls | grep -q log; then echo "there are files of type log"; fi
If you're on a system with an older or non-GNU version of grep that doesn't support the -q ("quiet") option, you can achieve the same result by redirecting its output to /dev/null:
if ls | grep log >/dev/null; then echo "there are files of type log"; fi
But since ls also returns nonzero if it doesn't find a specified file, you can do the same thing without the grep at all, as in D.Shawley's answer:
if ls *log* >&/dev/null; then echo "there are files of type log"; fi
You also can do it using only the shell, without even ls, though it's a bit wordier:
for f in *log*; do
    # even if there are no matching files, the body of this loop will run once
    # with $f set to the literal string "*log*", so make sure there's really
    # a file there:
    if [ -e "$f" ]; then
        echo "there are files of type log"
        break
    fi
done
As long as you're using bash specifically, you can set the nullglob option to simplify that somewhat:
shopt -s nullglob
for f in *log*; do
    echo "There are files of type log"
    break
done
Or without if; then; fi:
ls | grep -q log && echo 'there are files of type log'
Or even:
ls *log* &>/dev/null && echo 'there are files of type log'
The if built-in executes a shell command and selects the block based on the return value of the command. ls returns a distinct status code if it does not find the requested files, so there is no need for the grep part. The [[ utility is actually a built-in command in bash that evaluates conditional test expressions (arithmetic is what (( )) is for). I rarely stray far from Bourne shell syntax, so I don't use it much.
Anyway, if you put all of this together, then you end up with the following command:
if ls *log* > /dev/null 2>&1
then
    echo "there are files of type log"
fi
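As a related sketch (not one of the answers above), nullglob plus an array avoids both ls and the loop:
shopt -s nullglob
files=(*log*)                      # expands to an empty array if nothing matches
if (( ${#files[@]} > 0 )); then
    echo "there are files of type log"
fi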

How to make a BASH script work only in a specific directory?

My Linux homework requires that I write a script that only runs if the user is in ~/tareas/sesion_3, so I assume they first need to cd into ~/tareas/sesion_3 before the script commands will run; if not, it'll echo "you're not in ~/tareas/sesion_3". In the script I need to make more directories, and they can only be created in that location.
How can I write such a condition?
I appreciate every bit of help you guys can offer!
You can use $PWD to see which directory the script was run from; it holds the full current directory, with ~ already expanded. So you can do something like:
if [[ "$PWD" == "$HOME/tareas/sesion_3" ]]; then
    # do stuff if true
else
    # do stuff if false
fi
my answer is:
#!/bin/sh
TARGET_DIR="$HOME/tareas/sesion_3"   # ~ does not expand inside quotes, so use $HOME
do_something(){
    : # do something
}
do_something_v2(){
    : # create some dirs
}
if [ "$(pwd)" = "$TARGET_DIR" ] ; then
    do_something
else
    do_something_v2
fi
i hope it can help you
^_^
If you need to see if you are at least inside of the given directory, but perhaps in a child directory therein, grep is a good friend to have:
echo `pwd` | grep ^/starting/directory >/dev/null || {
    echo "You aren't in the proper place .."
    exit 1
}
Example of it working:
tpost@tpost-desktop:~$ echo `pwd` | grep ^/home/tpost >/dev/null || echo nope
tpost@tpost-desktop:~$ echo `pwd` | grep ^/home/foo >/dev/null || echo nope
nope
The caret (^) tells grep to match a line that starts with what you provide.
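For the homework case specifically, here is a minimal sketch of the same prefix check in pure shell, without grep (assuming the target directory is ~/tareas/sesion_3):
case "$PWD/" in
    "$HOME/tareas/sesion_3/"*) ;;                # in the target directory or below it
    *) echo "you're not in ~/tareas/sesion_3"
       exit 1 ;;
esac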

How to tail -f the latest log file with a given pattern

I work with some log system which creates a log file every hour, like follows:
SoftwareLog.2010-08-01-08
SoftwareLog.2010-08-01-09
SoftwareLog.2010-08-01-10
I'm trying to tail to follow the latest log file giving a pattern (e.g. SoftwareLog*) and I realize there's:
tail -F (tail --follow=name --retry)
but that only follows one specific name, and these have different names by date and hour. I tried something like:
tail --follow=name --retry SoftwareLog*(.om[1])
but the wildcard is resolved before it gets passed to tail and doesn't re-execute every time tail retries.
Any suggestions?
I believe the simplest solution is as follows:
tail -f `ls -tr | tail -n 1`
Now, if your directory contains other log files like "SystemLog" and you only want the latest "SoftwareLog" file, then you would simply include a grep as follows:
tail -f `ls -tr | grep SoftwareLog | tail -n 1`
[Edit: after a quick googling for a tool]
You might want to try out multitail - http://www.vanheusden.com/multitail/
If you want to stick with Dennis Williamson's answer (and I've +1'ed him accordingly) here are the blanks filled in for you.
In your shell, run the following script (or its zsh equivalent; I whipped this up in bash before I saw the zsh tag):
#!/bin/bash
TARGET_DIR="some/logfiles/"
SYMLINK_FILE="SoftwareLog.latest"
SYMLINK_PATH="$TARGET_DIR/$SYMLINK_FILE"
function getLastModifiedFile {
    echo $(ls -t "$TARGET_DIR" | grep -v "$SYMLINK_FILE" | head -1)
}
function getCurrentlySymlinkedFile {
    if [[ -h $SYMLINK_PATH ]]
    then
        echo $(ls -l $SYMLINK_PATH | awk '{print $NF}')
    else
        echo ""
    fi
}
symlinkedFile=$(getCurrentlySymlinkedFile)
while true
do
    sleep 10
    lastModified=$(getLastModifiedFile)
    if [[ $symlinkedFile != $lastModified ]]
    then
        ln -nsf $lastModified $SYMLINK_PATH
        symlinkedFile=$lastModified
    fi
done
Background that process using the normal method (again, I don't know zsh, so it might be different)...
./updateSymlink.sh > /dev/null 2>&1 &
Then tail -F $SYMLINK_PATH so that tail handles the changing of the symbolic link or a rotation of the file.
This is slightly convoluted, but I don't know of another way to do this with tail. If anyone else knows of a utility that handles this, let them step forward, because I'd love to see it myself too: applications like Jetty log this way by default, and I always script up a symlinking script run on a cron to compensate for it.
[Edit: Removed an erroneous 'j' from the end of one of the lines. You also had a bad variable name: "lastModifiedFile" didn't exist; the proper name that you set is "lastModified".]
I haven't tested this, but an approach that may work would be to run a background process that creates and updates a symlink to the latest log file and then you would tail -f (or tail -F) the symlink.
#!/bin/bash
PATTERN="$1"
# Try to make sure sub-shells exit when we do.
trap "kill -9 -- -$BASHPID" SIGINT SIGTERM EXIT
PID=0
OLD_FILES=""
while true; do
    FILES="$(echo $PATTERN)"
    if test "$FILES" != "$OLD_FILES"; then
        if test "$PID" != "0"; then
            kill $PID
            PID=0
        fi
        if test "$FILES" != "$PATTERN" || test -f "$PATTERN"; then
            tail --pid=$$ -n 0 -F $PATTERN &
            PID=$!
        fi
    fi
    OLD_FILES="$FILES"
    sleep 1
done
Then run it as: tail.sh 'SoftwareLog*'
The script will lose some log lines if the logs are written to between checks. But at least it's a single script, with no symlinks required.
We have daily rotating log files as: /var/log/grails/customer-2020-01-03.log. To tail the latest one, the following command worked fine for me:
tail -f /var/log/grails/customer-`date +'%Y-%m-%d'`.log
(NOTE: no space after the + sign in the expression)
So, for you, the following should work (if you are in the same directory of the logs):
tail -f SoftwareLog.`date +'%Y-%m-%d-%H'`
I believe the easiest way is to use tail with ls and head, try something like this
tail -f `ls -t SoftwareLog* | head -1`
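Since the timestamps in these file names sort lexically, a minimal sketch that avoids parsing ls entirely (assuming bash 4.3+ for the negative array index):
files=(SoftwareLog.*)      # glob expansion is sorted, so the last entry is the newest hour
tail -F "${files[-1]}"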
