Redirect lsof exit code into variable - linux

I'm trying to test whether a file is open and then do something with the exit code. Currently doing it like this:
FILE=/usr/local/test.sh
lsof "$FILE" | grep -q COMMAND &>/dev/null
completed=$?
Is there any way you can push the exit code straight into a local variable rather than redirecting output to /dev/null and capturing the '$?' variable?

Well, you could do:
lsof "$FILE" | grep -q COMMAND; completed=$?
There's no need to redirect anything, as grep -q is quiet anyway. If you want to perform a certain action when the grep succeeds, just use the && operator; storing the exit status in this case is probably unnecessary.
lsof "$FILE" | grep -q COMMAND && echo 'Command was found!'
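A side note on capturing statuses: $? only reflects the last stage of a pipeline. If you ever need the exit status of lsof itself rather than grep's, bash's PIPESTATUS array holds every stage's status. A minimal sketch, using false and true as stand-ins for the lsof and grep stages:

```shell
#!/usr/bin/env bash
# Stand-ins for `lsof "$FILE" | grep -q COMMAND`:
false | true
completed=$?                    # status of the LAST stage only -> 0
false | true
first_stage=${PIPESTATUS[0]}    # status of the FIRST stage -> 1
```

Note that PIPESTATUS is overwritten by every command, so grab the element you need immediately after the pipeline.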

Related

How to run all the scripts found by find

I'm trying to find all the init scripts created for websphere.
I know all the scripts end up with -init, so the first part of the code is:
find /etc/rc.d/init.d -name "*-init"
Also, I need all the scripts that run on a specific path, so the second part would be
| grep -i "/opt/ibm"
Finally, I need help with the last part. Having found the scripts, I need to run them with the stop argument.
find /etc/rc.d/init.d -name "*-init" | grep -i "/opt/ibm" | <<run script found with stop argument>>
How can I run the command found with find?
Use a loop so that we are a little more careful while executing them:
#!/bin/bash
shopt -s globstar
for file in /etc/rc.d/init.d/**/*-init; do      # grab all -init scripts
    script=$(readlink -f "$file")               # grab the actual file in case of a symlink
    [[ -f $script ]] || continue                # skip if not a regular file
    [[ $script = */opt/ibm/* ]] || continue     # skip unless it resolves to "/opt/ibm/"
    printf '%s\n' "Executing script '$script'"
    "$script" stop; exit_code=$?
    printf '%s\n' "Script '$script' finished with exit_code $exit_code"
done
If you omit the 'find' and use grep directly you could do something like this:
grep -i "/opt/ibm" /etc/rc.d/init.d/* | sed 's/:.*/ stop/g' | sort -u | bash
It uses grep directly, which adds the filename to the output (filename:matched line).
Since you only need the filename and not the match, sed replaces the ':' and the rest of the line with ' stop' (note the space before stop).
sort -u makes sure each script is executed only once.
The result is then piped into a shell.
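A middle ground between the loop and the pipe-to-bash one-liner, assuming GNU grep, is to let grep -l print just the names of the matching files and run each one in a read loop. A self-contained sketch, with a temporary directory standing in for /etc/rc.d/init.d and a dummy script standing in for a real init script:

```shell
#!/usr/bin/env bash
# Temp dir stands in for /etc/rc.d/init.d; the dummy script's CONTENTS
# mention /opt/ibm, which is what grep matches.
init_dir=$(mktemp -d)
printf '#!/bin/sh\n# installed under /opt/ibm/websphere\necho "stopping ($1)"\n' \
    > "$init_dir/websphere-init"
chmod +x "$init_dir/websphere-init"

# grep -l lists matching filenames only; each script is run with "stop".
stopped=$(grep -ril '/opt/ibm' "$init_dir" --include='*-init' |
    while IFS= read -r script; do
        "$script" stop
    done)
```

The read loop avoids the one-liner's word-splitting problems, though it still assumes the script paths contain no newlines.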

How to use return status value for grep?

Why isn't my command returning "0"?
grep 'Unable' check_error_output.txt && echo $? | tail -1
If I remove the 'echo $?' and use tail to get the last occurrence of 'Unable' in check_error_output.txt, it returns correctly. If I remove the tail -1, or replace the pipe with &&, it returns as expected.
What am I missing?
The following achieves what you want without pipes or subshells:
grep -q 'Unable' check_error_output.txt && echo $?
The -q flag stands for quiet / silent
From the man pages:
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. (-q is specified by POSIX.)
This is still not fail-safe, since a "No such file or directory" error will still come up either way.
I would instead suggest the following approach, since it reports the return value whether grep succeeds or fails:
grep -q 'Unable' check_error_output.txt 2> /dev/null; echo $?
The main difference is that, regardless of whether grep succeeds or fails, you still get the return code, and error messages are directed to /dev/null. Notice the ";" rather than "&&": it makes the echo run in either case.
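The ";" versus "&&" difference is easy to see with a file that does not contain the pattern. A small sketch using a throwaway file in place of check_error_output.txt:

```shell
#!/usr/bin/env bash
tmp=$(mktemp)
printf 'all good here\n' > "$tmp"     # no 'Unable' anywhere in the file

# With ";" the status is captured whether grep succeeds or fails:
grep -q 'Unable' "$tmp"
semi_status=$?                        # 1: pattern not found

# With "&&" the echo simply never runs when grep fails:
and_output=$(grep -q 'Unable' "$tmp" && echo "$?")
rm -f "$tmp"
```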
Use process substitution:
cat <(grep 'Unable' check_error_output.txt) <(echo $?) | tail -1
The simplest way to check the return value of any command in an if statement is: if cmd; then. For example:
if grep -q 'Unable' check_error_output.txt; then ...
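The if grep -q pattern from that answer can be sketched end to end with a throwaway file standing in for check_error_output.txt:

```shell
#!/usr/bin/env bash
tmpfile=$(mktemp)
printf 'Unable to connect\n' > "$tmpfile"

# if runs the command and branches on its exit status directly;
# no $? capture is needed at all.
if grep -q 'Unable' "$tmpfile"; then
    result=found
else
    result=absent
fi
rm -f "$tmpfile"
```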
I resolved this by adding parentheses around the grep and the echo $?:
(grep 'Unable' check_error_output.txt && echo $?) | tail -1

Bash Script output is always 'ps' when piping to grep from ps regardless of PID results

given an array of pids and the code:
for i in ${listedPids[@]}
do
runningCheck="ps -u $USER | grep $i"
grepRes=(${runningCheck})
if [[ -n $grepRes ]]
then
echo $grepRes
echo $runningCheck
... code not related to the issue
fi
done
Regardless of whether those pids are active or not, I keep getting 'ps' from echo $grepRes, while the output of echo $runningCheck shows the correct user name and pid. What am I missing?
Replace
"ps -u $USER | grep $i"
by
$(ps -u $USER | grep $i)
Command Substitution: Bash performs the expansion by executing your command and replacing the command substitution with the standard output of the command, with any trailing newlines deleted.
I simplified your script and here's what it should look like.
for i in "${listedPids[@]}"
do
grepRes=$(ps --no-heading -p "$i")
if [[ -n "$grepRes" ]]
then
echo "$grepRes"
... code not related to the issue
fi
done
Even shorter code can be written using a while loop.
ps --no-heading -p "${listedPids[@]}" | while read -r grepRes
do
echo "$grepRes"
... code not related to the issue
done
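One caveat with that while-read form, worth knowing in bash: a loop fed from a pipe runs in a subshell, so any variables set inside it are lost when the loop ends. A sketch of the pitfall and the process-substitution workaround:

```shell
#!/usr/bin/env bash
count=0
printf '1\n2\n3\n' | while read -r line; do
    count=$((count + 1))
done
pipe_count=$count      # still 0: the loop body ran in a subshell

count=0
while read -r line; do
    count=$((count + 1))
done < <(printf '1\n2\n3\n')
redir_count=$count     # 3: the loop ran in the current shell
```

If the loop only echoes, as above, the subshell is harmless; it matters once you accumulate state across iterations.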
As alvits and l0b0 pointed out, I made a few syntax errors: grepRes=(${runningCheck}) turned the string into an array instead of executing it, and pipes and redirects don't work inside variables. In the end pgrep did the job, as I just needed to keep looping until all the background processes ended.
Maybe you could try eval.
runningCheck1="ps -u $USER"
runningCheck2=" | grep $i"
echo $runningCheck1$runningCheck2
eval $runningCheck1$runningCheck2
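To see the corrected command substitution working, here is a sketch that checks one PID that is certainly alive (this shell itself) and one that has certainly exited. It uses the portable -o pid= form to suppress the header instead of the procps-specific --no-heading:

```shell
#!/usr/bin/env bash
# A PID that is certainly alive: this shell itself.
liveRes=$(ps -o pid= -p $$)

# A PID that has certainly exited and been reaped:
sleep 0 &
deadPid=$!
wait "$deadPid"
deadRes=$(ps -o pid= -p "$deadPid" || true)   # ps exits non-zero on no match
```

liveRes comes back non-empty, deadRes empty, so the [[ -n ... ]] test from the answer behaves as intended.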

Bash shell `if` command returns something `then` do something

I am trying to do an if/then statement, where if there is non-empty output from an ls | grep something command then I want to execute some statements. I do not know the syntax I should be using. I have tried several variations of this:
if [[ `ls | grep log ` ]]; then echo "there are files of type log";
Well, that's close, but you need to finish the if with fi.
Also, if just runs a command and executes the conditional code if the command succeeds (exits with status code 0), which grep does only if it finds at least one match. So you don't need to check the output:
if ls | grep -q log; then echo "there are files of type log"; fi
If you're on a system with an older or non-GNU version of grep that doesn't support the -q ("quiet") option, you can achieve the same result by redirecting its output to /dev/null:
if ls | grep log >/dev/null; then echo "there are files of type log"; fi
But since ls also returns nonzero if it doesn't find a specified file, you can do the same thing without the grep at all, as in D.Shawley's answer:
if ls *log* >&/dev/null; then echo "there are files of type log"; fi
You also can do it using only the shell, without even ls, though it's a bit wordier:
for f in *log*; do
# even if there are no matching files, the body of this loop will run once
# with $f set to the literal string "*log*", so make sure there's really
# a file there:
if [ -e "$f" ]; then
echo "there are files of type log"
break
fi
done
As long as you're using bash specifically, you can set the nullglob option to simplify that somewhat:
shopt -s nullglob
for f in *log*; do
echo "There are files of type log"
break
done
Or without if; then; fi:
ls | grep -q log && echo 'there are files of type log'
Or even:
ls *log* &>/dev/null && echo 'there are files of type log'
The if built-in executes a shell command and selects the block based on the return value of the command. ls returns a distinct status code if it does not find the requested files, so there is no need for the grep part. The [[ utility is actually a built-in bash command that evaluates conditional expressions ((( )) is the one for arithmetic). I rarely stray far from Bourne shell syntax myself.
Anyway, if you put all of this together, then you end up with the following command:
if ls *log* > /dev/null 2>&1
then
echo "there are files of type log"
fi
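Since the question is tagged bash, compgen -G is another ls-free option: it expands a glob and succeeds only if something matched. A sketch using a temporary directory so it is self-contained:

```shell
#!/usr/bin/env bash
tmpdir=$(mktemp -d)
touch "$tmpdir/app.log"

# compgen -G prints the matches and exits non-zero when there are none;
# the glob must be quoted so compgen, not the shell, expands it.
if compgen -G "$tmpdir/*log*" > /dev/null; then
    have_logs=yes
else
    have_logs=no
fi
rm -rf "$tmpdir"
```

Unlike parsing ls output, this never breaks on filenames containing spaces or newlines.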

How to tail -f the latest log file with a given pattern

I work with some log system which creates a log file every hour, like follows:
SoftwareLog.2010-08-01-08
SoftwareLog.2010-08-01-09
SoftwareLog.2010-08-01-10
I'm trying to tail the latest log file matching a given pattern (e.g. SoftwareLog*), and I realize there's:
tail -F (tail --follow=name --retry)
but that only follows one specific name, and these have different names by date and hour. I tried something like:
tail --follow=name --retry SoftwareLog*(.om[1])
but the wildcard is resolved before it gets passed to tail and isn't re-evaluated each time tail retries.
Any suggestions?
I believe the simplest solution is as follows:
tail -f `ls -tr | tail -n 1`
Now, if your directory contains other log files like "SystemLog" and you only want the latest "SoftwareLog" file, then you would simply include a grep as follows:
tail -f `ls -tr | grep SoftwareLog | tail -n 1`
[Edit: after a quick googling for a tool]
You might want to try out multitail - http://www.vanheusden.com/multitail/
If you want to stick with Dennis Williamson's answer (and I've +1'ed him accordingly) here are the blanks filled in for you.
In your shell, run the following script (or its zsh equivalent; I whipped this up in bash before I saw the zsh tag):
#!/bin/bash
TARGET_DIR="some/logfiles/"
SYMLINK_FILE="SoftwareLog.latest"
SYMLINK_PATH="$TARGET_DIR/$SYMLINK_FILE"
function getLastModifiedFile {
    echo $(ls -t "$TARGET_DIR" | grep -v "$SYMLINK_FILE" | head -1)
}
function getCurrentlySymlinkedFile {
    if [[ -h $SYMLINK_PATH ]]
    then
        echo $(ls -l $SYMLINK_PATH | awk '{print $NF}')
    else
        echo ""
    fi
}
symlinkedFile=$(getCurrentlySymlinkedFile)
while true
do
    sleep 10
    lastModified=$(getLastModifiedFile)
    if [[ $symlinkedFile != $lastModified ]]
    then
        ln -nsf $lastModified $SYMLINK_PATH
        symlinkedFile=$lastModified
    fi
done
Background that process using the normal method (again, I don't know zsh, so it might be different)...
./updateSymlink.sh > /dev/null 2>&1 &
Then tail -F $SYMLINK_PATH, so that tail handles the changing of the symbolic link or a rotation of the file.
This is slightly convoluted, but I don't know of another way to do it with tail. If anyone knows of a utility that handles this, let them step forward, because I'd love to see it myself too. Applications like Jetty log this way by default, and I always end up scripting a symlinking cron job to compensate for it.
[Edit: Removed an erroneous 'j' from the end of one of the lines. You also had a bad variable name: "lastModifiedFile" didn't exist; the proper name you set is "lastModified".]
I haven't tested this, but an approach that may work would be to run a background process that creates and updates a symlink to the latest log file and then you would tail -f (or tail -F) the symlink.
#!/bin/bash
PATTERN="$1"
# Try to make sure sub-shells exit when we do.
trap "kill -9 -- -$BASHPID" SIGINT SIGTERM EXIT
PID=0
OLD_FILES=""
while true; do
    FILES="$(echo $PATTERN)"
    if test "$FILES" != "$OLD_FILES"; then
        if test "$PID" != "0"; then
            kill $PID
            PID=0
        fi
        if test "$FILES" != "$PATTERN" || test -f "$PATTERN"; then
            tail --pid=$$ -n 0 -F $PATTERN &
            PID=$!
        fi
    fi
    OLD_FILES="$FILES"
    sleep 1
done
Then run it as: tail.sh 'SoftwareLog*'
The script will lose some log lines if the logs are written to between checks. But at least it's a single script, with no symlinks required.
We have daily rotating log files as: /var/log/grails/customer-2020-01-03.log. To tail the latest one, the following command worked fine for me:
tail -f /var/log/grails/customer-`date +'%Y-%m-%d'`.log
(NOTE: no space after the + sign in the expression)
So, for you, the following should work (if you are in the same directory of the logs):
tail -f SoftwareLog.`date +'%Y-%m-%d-%H'`
I believe the easiest way is to use tail with ls and head; try something like this:
tail -f `ls -t SoftwareLog* | head -1`
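One refinement to the backtick versions above: $(...) is the modern form of command substitution and nests more cleanly. A self-contained sketch in a temporary directory, with GNU touch -d forcing distinct timestamps; the tail -f itself is left commented out so the sketch terminates:

```shell
#!/usr/bin/env bash
logdir=$(mktemp -d)
touch -d '2010-08-01 08:00' "$logdir/SoftwareLog.2010-08-01-08"
touch -d '2010-08-01 09:00' "$logdir/SoftwareLog.2010-08-01-09"

# ls -t sorts newest first; head -n 1 keeps only the newest match.
newest=$(ls -t "$logdir"/SoftwareLog.* 2>/dev/null | head -n 1)
# tail -f "$newest"    # would follow the newest log from here on
```

Like every ls-based answer here, this assumes the log names contain no newlines, which is safe for this naming scheme.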
