Bash: expression recursion level exceeded (error token is ...) - linux

I'm writing a program that prints the username and the number of times that the user has logged in, or prints "Unknown user" otherwise.
My code is the following:
iden=$1
c='last | grep -w -c $iden'
if (( $c > 1 ))
then
    echo "$iden $c"
else
    echo "Unknown user"
fi
And I keep getting this error:
-bash: ((: last | grep -w -c 123: expression recursion level exceeded (error token is "c 123")

The error comes from the arithmetic context: (( $c > 1 )) expands $c to the literal string last | grep -w -c 123, and bash then tries to evaluate the bare words in it as variables. Since the value of c contains c again, the expansion recurses until bash gives up. To store the output of a command in a variable you need command substitution, var=$(command). Hence, use:
c=$(last | grep -w -c "$iden") # always good to quote the variables
instead of
c='last | grep -w -c $iden'
If you are learning Bash scripting, it is always handy to paste your code in ShellCheck to see the problems you may have.
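Putting it together, a corrected version of the script might look like this (a sketch that keeps the original structure; note it compares against 0 rather than 1 so that a single login is also reported):
#!/bin/bash
# Report how many times a user appears in the login history, or "Unknown user".
iden=$1
c=$(last | grep -w -c "$iden")   # command substitution, with the variable quoted
if (( c > 0 ))
then
    echo "$iden $c"
else
    echo "Unknown user"
fi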

You can also use backticks, the older command-substitution syntax:
c=`last | grep -w -c "$iden"`

Related

Bash oneliner with pipes and if condition giving error

I am trying to count instances of a particular process in bash using an if condition:
if ps auwx | grep -v grep | grep -ic python -le 2; then echo error; else echo no_error; fi
and I am getting output as
grep: python: No such file or directory
no_error
The one-liner seems to break if I use a pipe, and no error is thrown if I omit the pipe; using the absolute path to grep doesn't make a difference either. I cannot get the required result without the pipe. What am I doing wrong here? I can get this done in a script file by breaking it into variables and then comparing them, but I was using this as an exercise to learn bash. Any help is greatly appreciated.
First of all, the syntax of if command is:
if cmd; then
    # cmd exited with status 0 (success)
else
    # cmd exited with status >0 (fail)
fi
The cmd above is the so-called list - a sequence of pipelines. Each pipeline is a sequence of commands separated with |.
The -le operator is interpreted only by the test command (also spelled [) or the [[ keyword, not by if itself.
So, when you say:
if ps auwx | grep -v grep | grep -ic python -le 2; then ... fi
you actually call grep with arguments:
grep -ic python -le 2
And since -e is used to specify the search pattern, the argument python is interpreted as the name of a file to search for the pattern 2. That's why grep tells you it can't find a file named python.
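To see how grep groups those options, it may help to spell the command out long-hand (an illustration; running it reproduces the same error):
# grep -ic python -le 2  is effectively parsed as:
grep -i -c -l -e 2 python
# pattern: "2", file to search: "python"  =>  "grep: python: No such file or directory"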
To test the output of a command pipeline in if, you can use the command substitution inside the [[/[/test (as the other answer suggests):
if [[ $(ps auwx | grep -v grep | grep -ic python) -le 2 ]]; then ... fi
or, within (( .. )), with implicit arithmetic comparisons:
if (( $(ps auwx | grep -v grep | grep -ic python) <= 2 )); then ... fi
Use a command substitution in the condition:
if [[ $(ps ...) -le 2 ]]; then

How to use return status value for grep?

Why isn't my command returning "0"?
grep 'Unable' check_error_output.txt && echo $? | tail -1
If I remove the echo $? and use tail to get the last occurrence of 'Unable' in check_error_output.txt, it returns correctly. If I remove the tail -1, or replace the pipe with &&, it returns as expected.
What am I missing?
The following achieves what you want without pipes or subshells:
grep -q 'Unable' check_error_output.txt && echo $?
The -q flag stands for quiet / silent
From the man pages:
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. (-q is specified by POSIX.)
This is still not fail-safe, since a "No such file or directory" error will still come up both ways.
I would instead suggest the following approach, since it reports the return value in either case:
grep -q 'Unable' check_error_output.txt 2> /dev/null; echo $?
The main difference is that regardless of whether grep fails or succeeds, you still get the return code, and error messages are directed to /dev/null. Notice how I use ";" rather than "&&", so the status is echoed either way.
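A quick way to see the difference (the missing file name is just an example; the exit status 2 assumes GNU grep, which returns 2 on errors such as an unreadable file):
grep -q 'Unable' check_error_output.txt && echo $?     # with &&: prints 0 on a match, otherwise nothing
grep -q 'Unable' no_such_file.txt 2>/dev/null; echo $?  # with ; : always prints the status, here 2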
Use process substitution:
cat <(grep 'Unable' check_error_output.txt) <(echo $?) | tail -1
The simplest way to check the return value of any command in an if statement is: if cmd; then. For example:
if grep -q 'Unable' check_error_output.txt; then ...
I resolved this by adding parentheses around the grep and the echo $?:
(grep 'Unable' check_error_output.txt && echo $?) | tail -1

Bash: if statement always succeeding

I have the following if statement to check if a service, newrelic-daemon in this case, is running...
if [ $(ps -ef | grep -v grep | grep newrelic-daemon | wc -l) > 0 ]; then
echo "New Relic is already running."
The problem is that it always returns true, i.e. "New Relic is already running", even though when I run the command from the if condition separately...
ps -ef | grep -v grep | grep newrelic-daemon | wc -l
... it returns 0. I expect the if to do nothing here, as the value returned is 0 but my condition checks for > 0.
Am I overlooking something here?
You are trying to do a numeric comparison in [...] with >. That doesn't work; to compare values as numbers, use -gt instead:
if [ "$(ps -ef | grep -v grep | grep -c newrelic-daemon)" -gt 0 ]; then
The quotation marks around the command substitution prevent a syntax error if something goes horribly wrong (e.g. $PATH set wrong and the shell can't find grep). Since you tagged this bash specifically, you could also just use [[...]] instead of [...] and do without the quotes.
As another Bash-specific option, you could use ((...)) instead of either form of square brackets. This version is more likely to generate a syntax error if anything goes wrong (as the arithmetic expression syntax really wants all arguments to be numbers), but it lets you use the more natural comparison operators:
if (( "$(ps -ef | grep -v grep | grep -c newrelic-daemon)" > 0 )); then
In both cases I used grep -c instead of grep | wc -l; that way I avoided an extra process and a bunch of interprocess I/O just so wc can count lines that grep is already enumerating.
But since you're just checking to see if there are any matches at all, you don't need to do either of those; the last grep will exit with a true status if it finds anything and false if it doesn't, so you can just do this:
if ps -ef | grep -v grep | grep -q newrelic-daemon; then
(The -q keeps grep from actually printing out the matching lines.)
Also, if the process name you're looking for is a literal string instead of a variable, my favorite trick for this task is to modify that string like this, instead of piping through an extra grep -v grep:
if ps -ef | grep -q 'newrelic[-]daemon'; then
You can pick any character to put the square brackets around; the point is to create a regular expression pattern that matches the target process name but doesn't match the pattern itself, so the grep process doesn't find its own ps line.
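A quick illustration of why the bracket trick works, using echo to stand in for the ps output:
echo 'newrelic-daemon'   | grep -c 'newrelic[-]daemon'   # 1: the real process name matches
echo 'newrelic[-]daemon' | grep -c 'newrelic[-]daemon'   # 0: the pattern text itself does not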
Finally, since you tagged this linux, note that most Linux distros ship with a combination ps + grep command called pgrep, which does this for you without your having to build a pipeline:
if pgrep newrelic-daemon >/dev/null; then
(The MacOS/BSD version of pgrep accepts a -q option like grep, which would let you do without the >/dev/null redirect, but the versions I've found on Linux systems don't seem to have that option.)
There's also pidof; I haven't yet encountered a system that had pidof without pgrep, but should you come across one, you can use it the same way:
if pidof newrelic-daemon >/dev/null; then
Other answers have given you more details. I would do what you are trying to do with:
if pidof newrelic-daemon >/dev/null; then
    echo "New Relic is already running."
fi
or even
pidof newrelic-daemon >/dev/null && echo "New Relic is already running."
If you want to compare integers with test, you have to use the -gt operator. See:
man test
or
man [
@Stephen: Try changing [ to [[ in your code (and add fi to close the if block):
if [[ $(ps -ef | grep -v grep | grep newrelic-daemon | wc -l) > 0 ]]; then
    echo "New Relic is already running."
fi

Beginner's Bash Scripting: "Unary Operator Expected" error && how to use grep to search for a variable from the output of a command?

I'm attempting to write a bash script that is executed through "./filename username" and checks whether or not that user is logged in, printing the result. I'm still new to scripting and am having trouble understanding how to make this work.
I'm currently getting the error "line 7: [: ambonill: unary operator expected". What does that mean and how can I go about fixing that error?
Additionally, how would I get grep to work instead of sort | uniq? I'd like to grep for the variable from the output of the command but can't find anything related in the man page.
#! /bin/bash
# This script will take a username as an argument and determine whether they are logged on.
function loggedin {
    for u in `who | cut -f1 -d" " | sort | uniq`
    do
        if [ $u == $1 ]
        then
            echo "$1 is logged on"
        else
            echo "$1 is not logged on"
        fi
        exit 0
    done
}
loggedin $u
exit 1
Try to find a simpler solution, like:
#!/bin/bash
echo "$1 is $([ -z "$(w -h $1)" ]&&echo -n not\ )logged on"

Check for zero lines output from command over SSH

If I do the following and the network is down, then the zero case will be executed, which it shouldn't be.
case "$(ssh -n $host zfs list -t snapshot -o name -H | grep "tank/fs" | wc -l | awk '{print $1}')" in
0) # do something
;;
1) # do something else
;;
*) # fail
esac
Earlier in the script I check that I can SSH to $host, but today I found this problem, where the network failed right after my check.
If I check the return value from the SSH command, then I will always get the return value from awk as it is executed last.
Question
How do I ensure that I am actually counting zero lines of zfs output, and not zero lines from a failed SSH connection?
Say:
set -o pipefail
at the beginning of your script (or before the case statement).
Moreover, check the return code of the command before executing the case statement:
set -o pipefail
value=$(ssh -n $host zfs list -t snapshot -o name -H | grep "tank/fs" | wc -l | awk '{print $1}')
if [ $? -eq 0 ]; then
    case $value in
        ...
    esac
fi
From the manual:
pipefail
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
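A quick illustration of the difference, using false to stand in for a failed ssh:
false | wc -l | awk '{print $1}'; echo $?   # without pipefail: prints 0, then 0
set -o pipefail
false | wc -l | awk '{print $1}'; echo $?   # with pipefail: prints 0, then 1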
How about this (so the commands after ssh are executed on the remote machine rather than locally):
"$(ssh -n "$host" 'zfs list -t snapshot -o name -H | grep "tank/fs" | wc -l | awk '\''{print $1}'\'')"
Note how I single-quoted the command for ssh to run on the remote machine, and escaped the embedded single quotes around the awk program with the '\'' idiom. Check the return value of the call and only act when it returns success.
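Combining that with the status check from the other answer, a sketch of the whole check could look like this (the messages are placeholders; it relies on ssh exiting non-zero, typically 255, when the connection itself fails):
count=$(ssh -n "$host" 'zfs list -t snapshot -o name -H | grep "tank/fs" | wc -l')
if [ $? -eq 0 ]; then
    case $count in
        0) echo "no matching snapshots" ;;
        1) echo "one matching snapshot" ;;
        *) echo "unexpected count: $count" ;;
    esac
else
    echo "ssh to $host failed" >&2
fi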
