How to use a Linux pipe with user-defined shell scripts

I have two user defined shell scripts:
The first is add:
if [ $# -eq 2 ]
then
sum=`expr $1 + $2`
echo $sum
else
echo "usage: $0 num1 num2"
echo "num1 and num2 are two numbers"
exit 1
fi
The second is square:
echo `expr $1 \* $1`
Can anyone please tell me how to use a Linux pipe with these shell scripts? I tried something like:
add 10 20 | square
But it is giving me the list of files in that directory.

Using a pipe will pass the output from the first command to the stdin of the second command. You want the output to be used as arguments for the second command instead. Try xargs:
add 10 20 | xargs square
Of course you have to make sure that the output of the first command is just "30" in this case.
A little more explanation: a pipe will take the output of the first command and redirect it into the standard input stream of the second command. That means you will have to use a command like "read" (as some of the other answers do) to use the information from the input stream.
But your square script doesn't read anything from the standard input: it takes an argument instead. So we want to take the output of your first command (30) and use it as the argument for your second command. The "xargs" utility does exactly that: the standard input it receives will be passed as arguments to the square command. See https://en.wikipedia.org/wiki/Xargs.
By the way, command substitution has the same effect:
square $(add 10 20)
The syntax $(add 10 20) will run the add script and replace the expression with its output. So after running the add script the line looks like this:
square 30
And, in effect, we have again turned the output from add into an argument for square.

As written, you want to use command substitution, not a pipe (since square takes command line arguments, rather than reading from standard input):
square $(add 10 20)
To modify square so that add 10 20 | square works, use the read builtin:
#!/bin/bash
read input
echo $(( $input * $input )) # No need for the external expr command
add should also write any error messages to standard error, not standard output:
if [ $# -eq 2 ]
then
sum=$(( $1 + $2 ))
echo $sum
else
echo "usage :$0 num1 num2" >&2
echo "num1 and num2 are two numbers" >&2
exit 1
fi
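With these fixes in place (and the two scripts saved as add and square and made executable), the pipeline behaves as originally intended, since add prints 30 and square reads and squares it:
$ ./add 10 20 | ./square
900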

You can use the "read" command to read the value from STDIN, if no parameters are specified:
val="$1"
test -z "$1" && read val
echo `expr $val \* $val`
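For example, if this version is saved as square and made executable, both calling styles work:
$ ./square 5
25
$ echo 5 | ./square
25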

Related

Shell Script working with multiple files [duplicate]

I have this code below:
#!/bin/bash
filename=$1
file_extension=$( echo $1 | cut -d. -f2 )
directory=${filename%.*}
if [[ -z $filename ]]; then
echo "You forgot to include the file name, like this:"
echo "./convert-pdf.sh my_document.pdf"
else
if [[ $file_extension = 'pdf' ]]; then
[[ ! -d $directory ]] && mkdir $directory
convert $filename -density 300 $directory/page_%04d.jpg
else
echo "ERROR! You must use ONLY PDF files!"
fi
fi
And it is working perfectly well!
I would like to create a script which I can call like this: ./script.sh *.pdf
How can I do it, using the asterisk?
Thank you for your time!
First, realize that the shell expands *.pdf to a list of filenames before your script ever runs. This means your script will never see the *; instead it will get a list of arguments.
You can use a construction like the following:
#!/bin/bash
convert_pdf() {
local filename=$1
# do your thing here
}
if (( $# < 1 )); then
# give your error message about missing arguments
fi
while (( $# > 0 )); do
convert "$1"
shift
done
What this does is first wrap your functionality in a function called convert_pdf (renamed here so it does not shadow the ImageMagick convert command it calls). The main code then checks the number of arguments passed to the script; if this is less than 1 (i.e. none), you give the error that a filename should be passed. Then you go into a while loop which is executed as long as arguments remain. The first argument is passed to the convert_pdf function, which does what your script already does. Then the shift operation is performed: it throws away the first argument and shifts all the remaining arguments left by one place, so what was $2 is now $1, what was $3 is now $2, etc. By doing this in the while loop until the argument list is empty, you go through all the arguments.
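If the shift mechanics are unclear, here is a tiny illustration you can paste into a shell (set -- simulates the script being called with three arguments):
set -- a.pdf b.pdf c.pdf # pretend the script was called with three files
echo "$1" # prints a.pdf
shift # a.pdf is dropped; b.pdf and c.pdf move left
echo "$1" # prints b.pdf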
By the way, your initial assignments have a few issues:
- you can't assume that the filename has an extension; your code could match a dot in some directory path instead
- your directory assignment seems to be splitting on . instead of /
- your directory assignment will contain the filename if no absolute or relative path was given, i.e. only a bare filename
- ...
I think you should spend a bit more time on robustness.
Wrap your code in a loop. That is, instead of:
filename=$1
: code goes here
use:
for filename in "$#"; do
: put your code here
done
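Putting both answers together, here is a minimal sketch of the script from the question reworked to accept several PDFs at once; the convert invocation is copied from the question, and the extension check uses a pattern match instead of cut, per the robustness notes above:
#!/bin/bash
if (( $# < 1 )); then
echo "You forgot to include the file name, like this:" >&2
echo "./convert-pdf.sh my_document.pdf" >&2
exit 1
fi
for filename in "$@"; do
directory=${filename%.*}
if [[ $filename == *.pdf ]]; then
[[ -d $directory ]] || mkdir "$directory"
convert "$filename" -density 300 "$directory"/page_%04d.jpg
else
echo "ERROR! You must use ONLY PDF files!" >&2
fi
done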

I am running into a No such file or directory error in bash, but it doesn't seem to be failing on a file

I have been running through my code for a while and can't seem to find the reason this is failing. It fails on line 10 apparently, which is the if statement, but it is correctly finding the value of line.
#!/bin/bash
#a script that reads the largest number from a file
file="$1"
largest=""
while IFS= read -r line
do
if("$line" > "$largest")
then
"$largest"="$line"
fi
done <"$file"
echo "$largest"
This is incorrect:
if("$line" > "$largest")
then
"$largest"="$line"
fi
Change to:
if [ "$line" -gt "$largest" ]
then
largest="$line"
fi
First, as pointed out in the comments, > is a redirection operator, and bash is trying to run "$line" as a command. Parentheses are not test operators; the square brackets are.
Finally, the "$largest" is incorrect as the target of an assignment. The $ tells bash to provide the value of the variable, and we want to assign to largest, not to the VALUE of largest.
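For reference, a corrected sketch of the whole script. It assumes the file holds one integer per line, and it seeds largest from the first line, because comparing against an empty string would make -gt fail with "integer expression expected":
#!/bin/bash
#a script that reads the largest number from a file
file="$1"
read -r largest < "$file" # seed with the first line so -gt always compares two integers
while IFS= read -r line
do
if [ "$line" -gt "$largest" ]
then
largest="$line"
fi
done < "$file"
echo "$largest"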

"test: too many arguments" message because of special character * while using test command on bash to compare two strings [duplicate]

I'm new to shell scripting and I'm having some trouble while using the "test" command and the special character * to compare two strings.
I have to write a shell script which, for every element (both files and directories) contained in the directory passed as the first argument, has to write something (for the purposes of my problem it is not relevant to know what has to be written) to the file "summary.out". What's more, there's a string passed as the second argument. Files and directories whose names begin with this string must be ignored (nothing is written to summary.out for them).
Here is the code:
#!/bin/bash
TEMP=NULL
cd "$1"
for i in *
do
if test "$i" != "$2"*;then #Here is where the error comes from
if test -f "$i";then
TEMP="$i - `head -c 10 "$i"`"
elif test -d "$i";then
TEMP="$i - `ls -1 "$i" | wc -l`"
fi
echo $TEMP >> summary.out
fi
done
The error comes from the test which checks whether the current file/directory begins with the string passed as the second argument, and it occurs on every iteration of the for loop. It states: "test: too many arguments".
Now, I have performed some tests which showed that the problem has nothing to do with blank spaces inside $i or $1. The problem is linked to the fact that I use the special character * in the test (if I remove it, everything works fine).
Why can't "test" handle *? What can I do to fix that?
* gets expanded by the shell before test ever sees it.
In bash, you can use [[ ... ]] for conditions instead of test. It supports patterns on the right-hand side, where * is not expanded against filenames, because [[ is a shell keyword rather than an ordinary command.
if [[ a == * ]] ; then
echo Matches
else
echo Doesn\'t match
fi
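Applied to the script in the question, the failing line becomes:
if [[ "$i" != "$2"* ]]; then
Quoting "$2" keeps any special characters in the argument literal, while the unquoted * acts as the pattern.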

Reading a parameter through command prompt (starts with "-")

I am facing a problem while reading input from the command prompt in a shell script. My script's name is status.ksh, and I have to take parameters from the command prompt. This script accepts two parameters: the first is "-e" and the second is "server_name".
When I run the script like this,
status.ksh -e server_name
echo $@
gives the output "server_name" only, whereas the expected output is "-e server_name",
and echo $1 gives NULL, whereas the expected output is "-e".
Please guide me on how to get the first parameter, which is "-e".
Thanks & Regards
The problem was caused by -e. This is a flag for echo.
-e enable interpretation of backslash escapes
Most of the unix commands allow -- to be used to separate flags and the rest of the arguments, but echo doesn't support this, so you need another command:
printf "%s\n" "$1"
If you need complex command line argument parsing, definitely go with getopts as Joe suggested.
Have you read this reference? http://www.lehman.cuny.edu/cgi-bin/man-cgi?getopts+1
You shouldn't use $1, $2, $#, etc to parse options. There are builtins that can handle this for you.
Example 2: Processing Arguments for a Command with Options
The following fragment of a shell program processes the arguments for a command that can take the options -a or -b. It also processes the option -o, which requires an option-argument:
while getopts abo: c
do
case $c in
a | b) FLAG=$c;;
o) OARG=$OPTARG;;
\?) echo $USAGE
exit 2;;
esac
done
shift `expr $OPTIND - 1`
More examples:
http://linux-training.be/files/books/html/fun/ch21s05.html
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds2/getopts.htm
http://www.livefirelabs.com/unix_tip_trick_shell_script/may_2003/05262003.htm
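For instance, here is a minimal sketch of how status.ksh could parse its -e option with getopts (the variable names are illustrative, not from the original script):
#!/bin/ksh
while getopts e: opt
do
case $opt in
e) server_name=$OPTARG;;
\?) echo "usage: $0 -e server_name" >&2
exit 2;;
esac
done
shift $((OPTIND - 1))
echo "server: $server_name"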

Read values into a shell variable from a pipe

I am trying to get bash to process data from stdin that gets piped into it, but no luck. What I mean is none of the following work:
echo "hello world" | test=($(< /dev/stdin)); echo test=$test
test=
echo "hello world" | read test; echo test=$test
test=
echo "hello world" | test=`cat`; echo test=$test
test=
where I want the output to be test=hello world. I've tried putting quotes around "$test"; that doesn't work either.
Use
IFS= read var << EOF
$(foo)
EOF
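For example, with the echo from the question standing in for foo:
IFS= read test << EOF
$(echo "hello world")
EOF
echo "test=$test" # prints test=hello world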
You can trick read into accepting from a pipe like this:
echo "hello world" | { read test; echo test=$test; }
or even write a function like this:
read_from_pipe() { read "$@" <&0; }
But there's no point - your variable assignments may not last! A pipeline may spawn a subshell, where the environment is inherited by value, not by reference. This is why read doesn't bother with input from a pipe - it's undefined.
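A quick demonstration of that subshell effect: the variable is visible inside the pipeline's command group but empty afterwards:
$ echo "hello world" | { read test; echo "inside: test=$test"; }
inside: test=hello world
$ echo "outside: test=$test"
outside: test=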
FYI, http://www.etalabs.net/sh_tricks.html is a nifty collection of the cruft necessary to fight the oddities and incompatibilities of Bourne shells.
If you want to read in lots of data and work on each line separately you could use something like this:
cat myFile | while read x ; do echo $x ; done
If you want to split the lines up into multiple words you can use multiple variables in place of x like this:
cat myFile | while read x y ; do echo $y $x ; done
alternatively:
while read x y ; do echo $y $x ; done < myFile
But as soon as you start to want to do anything really clever with this sort of thing you're better going for some scripting language like perl where you could try something like this:
perl -ane 'print "$F[0]\n"' < myFile
There's a fairly steep learning curve with perl (or I guess any of these languages) but you'll find it a lot easier in the long run if you want to do anything but the simplest of scripts. I'd recommend the Perl Cookbook and, of course, Programming Perl by Larry Wall et al.
This is another option
$ read test < <(echo hello world)
$ echo $test
hello world
read won't read from a pipe (or possibly the result is lost because the pipe creates a subshell). You can, however, use a here string in Bash:
$ read a b c <<< $(echo 1 2 3)
$ echo $a $b $c
1 2 3
But see @chepner's answer for information about lastpipe.
I'm no expert in Bash, but I wonder why this hasn't been proposed:
stdin=$(cat)
echo "$stdin"
One-liner proof that it works for me:
$ fortune | eval 'stdin=$(cat); echo "$stdin"'
Bash 4.2 introduces the lastpipe option, which allows your code to work as written, by executing the last command in a pipeline in the current shell, rather than a subshell.
shopt -s lastpipe
echo "hello world" | read test; echo test=$test
A smart script that can read data both from a pipe and from command line arguments:
#!/bin/bash
if [[ -p /dev/stdin ]]
then
PIPE=$(cat -)
echo "PIPE=$PIPE"
fi
echo "ARGS=$#"
Output:
$ bash test arg1 arg2
ARGS=arg1 arg2
$ echo pipe_data1 | bash test arg1 arg2
PIPE=pipe_data1
ARGS=arg1 arg2
Explanation: When a script receives data via a pipe, /dev/stdin (or /proc/self/fd/0) will be a symlink to a pipe.
/proc/self/fd/0 -> pipe:[155938]
If not, it will point to the current terminal:
/proc/self/fd/0 -> /dev/pts/5
The bash [[ -p /dev/stdin ]] test can check whether it is a pipe or not.
cat - reads from stdin.
If we use cat - when there is no stdin, it will wait forever, that is why we put it inside the if condition.
The syntax for an implicit pipe from a shell command into a bash variable is
var=$(command)
or
var=`command`
In your examples, you are piping data to an assignment statement, which does not expect any input.
In my eyes the best way to read from stdin in bash is the following, which also lets you work on the lines before the input ends:
while read LINE; do
echo $LINE
done < /dev/stdin
The first attempt was pretty close. This variation should work:
echo "hello world" | { test=$(< /dev/stdin); echo "test=$test"; };
and the output is:
test=hello world
You need braces after the pipe to enclose the assignment to test and the echo.
Without the braces, the assignment to test (after the pipe) is in one shell, and the echo "test=$test" is in a separate shell which doesn't know about that assignment. That's why you were getting "test=" in the output instead of "test=hello world".
Because I fell for it, I would like to drop a note.
I found this thread because I have to rewrite an old sh script to be POSIX compatible. This basically means circumventing the pipe/subshell problem introduced by POSIX by rewriting code like this:
some_command | read a b c
into:
read a b c << EOF
$(some_command)
EOF
And code like this:
some_command |
while read a b c; do
# something
done
into:
while read a b c; do
# something
done << EOF
$(some_command)
EOF
But the latter does not behave the same on empty input. With the old notation the while loop is not entered on empty input, but in POSIX notation it is! I think it's due to the newline before EOF, which cannot be omitted. The POSIX code which behaves more like the old notation looks like this:
while read a b c; do
case $a in ("") break; esac
# something
done << EOF
$(some_command)
EOF
In most cases this should be good enough. But unfortunately this still does not behave exactly like the old notation if some_command prints an empty line: in the old notation the while body is executed, whereas in POSIX notation we break in front of the body.
An approach to fix this might look like this:
while read a b c; do
case $a in ("something_guaranteed_not_to_be_printed_by_some_command") break; esac
# something
done << EOF
$(some_command)
echo "something_guaranteed_not_to_be_printed_by_some_command"
EOF
Piping something into an expression involving an assignment doesn't behave like that.
Instead, try:
test=$(echo "hello world"); echo test=$test
The following code:
echo "hello world" | ( test=($(< /dev/stdin)); echo test=$test )
will work too, but it will open another new sub-shell after the pipe, where
echo "hello world" | { test=($(< /dev/stdin)); echo test=$test; }
won't.
I had to disable job control to make use of chepner's method (I was running this command from the terminal):
set +m;shopt -s lastpipe
echo "hello world" | read test; echo test=$test
echo "hello world" | test="$(</dev/stdin)"; echo test=$test
Bash Manual says:
lastpipe
If set, and job control is not active, the shell runs the last command
of a pipeline not executed in the background in the current shell
environment.
Note: job control is turned off by default in a non-interactive shell and thus you don't need the set +m inside a script.
I think you were trying to write a shell script which could take input from stdin, but while trying to do it inline, you got lost trying to create that test= variable. It does not make much sense to do it inline, and that's why it does not work the way you expect.
I was trying to reduce
$( ... | head -n $X | tail -n 1 )
to get a specific line from various input.
So I could type...
cat program_file.c | line 34
So I needed a small shell program able to read from stdin, like yours.
22:14 ~ $ cat ~/bin/line
#!/bin/sh
if [ $# -ne 1 ]; then echo enter a line number to display; exit; fi
cat | head -n $1 | tail -n 1
22:16 ~ $
There you go.
The question is how to capture the output of a command to save it in variable(s) for use later in a script. I might repeat some earlier answers, but I try to line up all the answers I can think of to compare and comment, so bear with me.
The intuitive construct
echo test | read x
echo x=$x
is valid in the Korn shell because ksh runs the last command of a pipeline in the current shell, i.e. the previous pipe commands are subshells. In contrast, other shells run all piped commands in subshells, including the last.
This is the exact reason I prefer ksh.
But when having to cope with other shells, e.g. bash, another construct must be used.
To catch 1 value this construct is viable:
x=$(echo test)
echo x=$x
But that only caters for 1 value to be collected for later use.
To catch more values this construct is useful and works in bash and ksh:
read x y <<< $(echo test again)
echo x=$x y=$y
There is a variant which I have noticed works in bash but not in ksh:
read x y < <(echo test again)
echo x=$x y=$y
The <<< $(...) is a here-document variant which gives all the meta handling of a standard command line. < <(...) is an input redirection of a file-substitution operator.
I use "<<< $(" in all my scripts now because it seems the most portable construct between shell variants. I have a tools set I carry around on jobs in any Unix flavor.
Of course there is the universally viable but crude solution:
command-1 | { command-2; echo "x=test; y=again" > file.tmp; chmod 700 file.tmp; }
. ./file.tmp
rm file.tmp
echo x=$x y=$y
I wanted something similar - a function that parses a string that can be passed as a parameter or piped.
I came up with a solution as below (works as #!/bin/sh and as #!/bin/bash)
#!/bin/sh
set -eu
my_func() {
local content=""
# if the first param is not set (note: ${1+x} is empty only when $1 is unset)
if [ -z ${1+x} ]; then
# read content from a pipe if passed or from a user input if not passed
while read line; do content="${content}$line"; done < /dev/stdin
# first param was set (it may be an empty string)
else
content="$1"
fi
echo "Content: '$content'";
}
printf "0. $(my_func "")\n"
printf "1. $(my_func "one")\n"
printf "2. $(echo "two" | my_func)\n"
printf "3. $(my_func)\n"
printf "End\n"
Outputs:
0. Content: ''
1. Content: 'one'
2. Content: 'two'
typed text
3. Content: 'typed text'
End
For the last case (3.) you need to type, hit Enter, and press Ctrl+D to end the input.
How about this:
echo "hello world" | echo test=$(cat)
