Bash subshell consumes stdin of the parent process - linux

Let's say I have a main.sh script that will be calling one.sh via a subshell.
one.sh:
#! /bin/bash
set -euo pipefail
(if [ -t 0 ]; then
echo "one little two little three little buses"
else
cat -
fi) | awk '{ $1 = "111"; print $0 }'
main.sh:
#! /bin/bash
set -euo pipefail
main() {
echo "one_result) $(./one.sh)"
echo "one_piped) $(echo "the quick brown fox" | ./one.sh)"
}
main
Now, each of them works as expected:
$ ./one.sh
111 little two little three little buses
$ ./main.sh
one_result) 111 little two little three little buses
one_piped) 111 quick brown fox
But when I pipe something to main.sh, I wasn't expecting (or rather, I don't want) one.sh to see the piped content, because I thought one.sh was in its own subshell for one_result):
$ echo "HELLO WORLD MAIN" | ./main.sh
one_result) 111 WORLD MAIN
one_piped) 111 quick brown fox
Is my if condition in one.sh not what I want? I would like one.sh not to have the side effect of consuming main.sh's stdin - now that it has consumed it, my main.sh is effectively stdin-less, since stdin can only be read once unless I store it away.
Thoughts? TIA.

In general, subshells (and other processes that a shell spawns) inherit stdin from the parent shell. If that's the terminal, your test will work as you expect; if it's a pipe then it will detect that it's a pipe and go ahead and consume it. There's no way for the subshell to tell whether it got that pipe by having it explicitly assigned (as in echo "the quick brown fox" | ./one.sh) or via inheritance.
As far as I can see, the only way to avoid this problem is to explicitly redirect one.sh's input to something other than a pipe to avoid it inheriting the parent shell's stdin (which might be a pipe). Something like:
echo "one_nonpipe) $(./one.sh </dev/null)"
echo "one_piped) $(echo "the quick brown fox" | ./one.sh)"
... but what'd be even better would be to add a flag to tell one.sh whether to read from stdin or not, rather than trying to figure it out from the type of file attached to stdin:
#! /bin/bash
# Usage: one.sh [-i]
# -i Read from stdin
set -euo pipefail
if [ "$1" = "-i" ]; then
cat -
else
echo "one little two little three little buses"
fi | awk '{ $1 = "111"; print $0 }'
...
echo "one_result) $(./one.sh)"
echo "one_piped) $(echo "the quick brown fox" | ./one.sh -i)"
(Note that I also removed the unnecessary parentheses around the if block -- they created another level of subshell for no good reason.)

By default, every process inherits its standard input (and output and error) from its parent. Input redirection and pipes are two ways to change standard input to a different file descriptor when starting the child process.
It is the responsibility of main.sh, if it needs to read from its standard input, to know that one.sh also reads from standard input, and to prevent one.sh from consuming it.
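A rough sketch of what that could look like (untested; it assumes the -i flag variant of one.sh from the answer above, and drains main.sh's own stdin exactly once up front):
#! /bin/bash
set -euo pipefail

main() {
    # Drain main.sh's own stdin exactly once, if anything was piped in.
    local piped=""
    if [ ! -t 0 ]; then
        piped=$(cat)
    fi

    # one.sh is never asked to read stdin here (assuming the -i flag variant),
    # so it cannot consume anything behind main.sh's back.
    echo "one_result) $(./one.sh)"
    echo "one_piped) $(printf '%s\n' "$piped" | ./one.sh -i)"
}
main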

Related

File redirection fails in Bash script, but not Bash terminal

I am having a problem where cmd1 works, but not cmd2 in my Bash script ending in .sh. I have made the Bash script executable.
Additionally, I can execute cmd2 just fine from my Bash terminal. I have tried to make a minimally reproducible example, but my larger goal is to run a complicated executable with command line arguments and pass output to a file that may or may not exist (rather than displaying the output in the terminal).
Replacing > with >> also gives the same error in the script, but not the terminal.
My Bash script:
#!/bin/bash
cmd1="cat test.txt"
cmd2="cat test.txt > a"
echo $cmd1
$cmd1
echo $cmd2
$cmd2
test.txt has the words "dog" and "cat" on two separate lines without quotes.
Short answer: see BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail!.
Long answer: the shell expands variable references (like $cmd1) toward the end of the process of parsing a command line, after it's done parsing redirects (like > a is supposed to be) and quotes and escapes and... In fact, the only thing it does with the expanded value is word splitting (e.g. treating cat test.txt > a as "cat" followed by "test.txt", ">", and finally "a", rather than a single string) and wildcard expansion (e.g. if $cmd expanded to cat *.txt, it'd replace the *.txt part with a list of matching files). (And it skips word splitting and wildcard expansion if the variable is in double-quotes.)
Partly as a result of this, the best way to store commands in variables is: don't. That's not what they're for; variables are for data, not commands. What you should do instead, though, depends on why you were storing the command in a variable.
If there's no real reason to store the command in a variable, then just use the command directly. For conditional redirects, just use a standard if statement:
if [ -f a ]; then
cat test.txt > a
else
cat test.txt
fi
If you need to define the command at one point, and use it later; or want to use the same command over and over without having to write it out in full each time, use a function:
cmd2() {
cat test.txt > a
}
cmd2
It sounds like you may need to define the command differently depending on some condition; you can actually do that with a function as well:
if [ -f a ]; then
cmd() {
cat test.txt > a
}
else
cmd() {
cat test.txt
}
fi
cmd
Alternately, you can wrap the command (without redirect) in a function, then use a conditional to control whether it redirects:
cmd() {
cat test.txt
}
if [ -f a ]; then
cmd > a
else
cmd
fi
It's also possible to wrap a conditional redirect into a function itself, then pipe output to it:
maybe_redirect_to() {
if [ -f "$1" ]; then
cat > "$1"
else
cat
fi
}
cat test.txt | maybe_redirect_to a
(This creates an extra cat process that isn't really doing anything useful, but if it makes the script cleaner, I'd consider that worth it. In this particular case, you could minimize the stray cats by using maybe_redirect_to a < test.txt.)
As a last resort, you can store the command string in a variable, and use eval to parse it. eval basically re-runs the shell parsing process from the beginning, meaning that it'll recognize things like redirects in the string. But eval has a well-deserved reputation as a bug magnet, because it's easy for it to treat parts of the string you thought were just data as command syntax, which can cause some really weird (& dangerous) bugs.
If you must use eval, at least double-quote the variable reference, so it runs through the parsing process just once, rather than sort-of-once-and-a-half as it would unquoted. Here's an example of what I mean:
cmd3="echo '5 * 3 = 15'"
eval "$cmd3"
# prints: 5 * 3 = 15
eval $cmd3
# prints: 5 [list of files in the current directory] 3 = 15
# ...unless there are any files with shell metacharacters in their names, in
# which case something more complicated might happen.
BashFAQ #50 discusses some other possible reasons and solutions. Note that the array approach will not work here, since arrays also get expanded after redirects are parsed.
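For illustration only, a quick sketch of why an array doesn't help with the redirect (the > stays a literal argument to cat):
cmd=(cat test.txt ">" a)
"${cmd[@]}"   # runs cat with the arguments test.txt, > and a; no redirection happens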
If you pop an 'eval' in front of $cmd2 it should work as expected:
#!/bin/bash
cmd2="cat test.txt > a"
eval $cmd2
If you're not sure about the operation of a script you could always use the debug mode to see if you can determine the error.
bash -x scriptname
This will run the command and display the output of variable evaluations. Hopefully this will reveal any issues with syntax.
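For the script in the question, the interesting part of the trace would look roughly like this (exact formatting varies by bash version); it makes the problem visible, since the > reaches cat as an ordinary word rather than being treated as a redirect:
+ cat test.txt '>' a
cat then fails because there is no file literally named >, which is why the line fails in the script but works when typed directly at the prompt (where the shell itself parses the redirect before running cat).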

Why would someone use echo to assign values to variables in bash or ksh?

Recently I came across an unusual use of echo to assign variables in a client's ksh scripts.
For example, there are many instances such as the following
a='something'
b='else'
c=`echo "${a} ${b}"`
I have been unable to come up with any reason why someone may have done this.
Could there be some legacy reason for this?
(I've been doing shell for 30+ years, and never before have I seen this)
Or is it just ignorance?
There is no compelling reason whatsoever for this, either in current bash, or its POSIX sh or Bourne predecessors.
c="$a $b"
...is a complete replacement for the code given, and runs far faster (try putting it in a loop; command substitutions, as created by backticks, fork off a new shell as a subprocess and read its stdout -- a high-overhead operation).
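A rough way to see the difference yourself (numbers will vary by system; the point is the order of magnitude):
a=something b=else
time for i in {1..1000}; do c=`echo "${a} ${b}"`; done   # forks 1000 subshells
time for i in {1..1000}; do c="$a $b"; done              # no forks at all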
What you saw is an example of a pointless use of echo, because c could simply be assigned as:
c="$a $b"
A common use of echo is when you need commands to filter output, for example
$ line='100090 $100,00 Mary'
$ name=`echo "$line" | grep -Eo "[a-zA-Z]+$"`
$ echo $name
Mary
But it would be more efficient if you don't use echo at all. The same thing above can be done with "read", without creating a new process:
$ line="100090 $100,00 Paul"
$ read -r _ _ name _ <<<"$line"
echo $name
Paul

As with the command: "echo '#!/bin/bash' | tee file", but with "echo '#!/bin/bash' | myscript file"

What "... | tee file" does is take stdin (standard input) and divert it to two places: stdout (standard output) and to a path/file named "file". In effect it does this, as far as I can judge:
#!/bin/bash
var=(cat) # same as var=(cat /dev/stdin)
echo -e "$var"
for file in "$#"
do
echo -e "$var" > "${file}"
done
exit 0
So I used the above code to create tee1, to see if I could emulate what tee does. But my real intent is to write a modified version that appends to existing file(s) rather than redoing them from scratch. I call this one tee2:
#!/bin/bash
var=(cat) # same as var=(cat /dev/stdin)
echo -e "$var"
for file in "$#"
do
echo -e "$var" >> "${file}"
done
exit 0
It makes sense to me, but not to bash. Now an alternative approach is to do something like this:
echo -e "$var"
for file in "$#"
do
echo -e "$var"| tee tmpfile
cat tmpfile >> "${file}"
done
rm tmpfile
exit 0
It also makes sense to me to do this:
#!/bin/bash
cp -rfp /dev/stdin tmpfile
cat tmpfile
for file in "$#"
do
cat tmpfile >> "${file}"
done
exit 0
Or this:
#!/bin/bash
cat /dev/stdin
for file in "$#"
do
cat /dev/stdin >> "${file}"
done
exit 0
Some online searches suggest that printf be used in place of echo -e for more consistency across platforms. Others suggest that cat be used in place of read, though since stdin is a device, it should be able to be used in place of cat, as in:
> tmpfile
IFS=\n
while read line
do
echo $line >> tmpfile
echo $line
done < /dev/stdin
unset IFS
Then the for loop follows. But I can't get that to work. How can I do it with bash?
But my real intent is to write a modified version that appends to existing file(s) rather than redo them from scratch.
The tee utility is specified to support an -a option, meaning "Append the output to the files." [spec]
(And I'm not aware of any implementations of tee that deviate from the spec in this regard.)
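So, for the appending behaviour you describe, the usage would simply be something like:
echo '#!/bin/bash' | tee -a file1 file2   # appends to both files and still writes to stdout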
Edited to add: If your question is really "what's wrong with all the different things I tried", then, that's probably too broad for a single Stack Overflow question. But here's a short list:
var=(cat) means "Set the array variable var to contain a single element, namely, the string cat."
Note that this does not, in any way, involve the program cat.
You probably meant var=$(cat), which means "Run the command cat, capturing its standard output. Discard any null bytes, and discard any trailing sequence of newlines. Save the result in the regular variable var."
Note that even this version is not useful for faithfully implementing tee, since tee does not discard null bytes and trailing newlines. Also, tee forwards input as it becomes available, whereas var=$(cat) has to wait until input has completed. (This is a problem if standard input is coming from the terminal — in which case the user would expect to see their input echoed back — or from a program that might be trying to communicate with the user — in which case you'd get a deadlock.)
echo -e "$var" makes a point of processing escape sequences like \t. (That's what the -e means.) This is not what you want. In addition, it appends an extra newline, which isn't what you want if you've managed to set $var correctly. (If you haven't managed to set $var correctly, then this might help compensate for that, but it won't really fix the problem.)
To faithfully print the contents of var, you should write printf %s "$var".
I don't understand why you switched to the | tee tmpfile approach. It doesn't improve anything so far as I can tell, and it introduces the bug that now if you're copying to n files, then you will also write n copies to standard output. (You fixed that bug in later versions, though.)
The versions where you write directly to a file, instead of saving to a variable first, are a massive improvement in terms of faithfully copying the contents of standard input. But they still have the problem of waiting until input is complete.
The version where you cat /dev/stdin multiple times (once for each destination) won't work, because there's no "rewinding" of standard input. Once something is consumed, it's gone. (This makes sense when you consider that standard input is frequently passed around from program to program — your cat-s, for example, are inheriting it from your Bash script, and your Bash script may be inheriting it from the terminal. If some sort of automatic rewinding were to happen, how would it decide how far back to go?) (Note: if standard input is coming from a regular file, then it's possible to explicitly seek backward along it, and thereby "unconsume" already-consumed input. But that doesn't happen automatically, and anyway that's not possible when standard input is coming from a terminal, from a pipe, etc.)
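If you did want a pure-shell approximation anyway (line-oriented only, so not byte-faithful the way the real tee is), a sketch might look like this:
#!/bin/bash
# tee2-ish: copy stdin to stdout and append it to every file named as an argument.
while IFS= read -r line; do
    printf '%s\n' "$line"
    for file in "$@"; do
        printf '%s\n' "$line" >> "$file"
    done
done
But tee -a remains the better tool for this job.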

Can I avoid using a FIFO file to join the end of a Bash pipeline to be stored in a variable in the current shell?

I have the following functions:
execIn ()
{
local STORE_INvar="${1}" ; shift
printf -v "${STORE_INvar}" '%s' "$( eval "$#" ; printf %s x ; )"
printf -v "${STORE_INvar}" '%s' "${!STORE_INvar%x}"
}
and
getFifo ()
{
local FIFOfile
FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
while [ -e "${FIFOfile}" ]
do
FIFOfile="/tmp/diamondLang-FIFO-$$-${RANDOM}"
done
mkfifo "${FIFOfile}"
echo "${FIFOfile}"
}
I want to store the output of the end of a pipeline into a variable, as given to a function at the end of the pipeline. However, the only way I have found to do this that works in early versions of Bash is to use mkfifo to make a temporary FIFO file. I was hoping to use file descriptors to avoid having to create temporary files. So, this works, but is not ideal:
Set Up: (before I can do this I need to have assigned a FIFO file to a var that can be used by the rest of the process)
$ FIFOfile="$( getFifo )"
The Pipeline I want to persist:
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 # for e.g.
The action: (I can now add) >${FIFOfile} &
$ printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 >${FIFOfile} &
N.B. The need to background it with & - Problem 1: I get [1] <PID_NO> output to the screen.
The actual persist:
$ execIn SOME_VAR cat - <${FIFOfile}
Problem 2: I get more noise to the screen
[1]+ Done printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 > ${FIFOfile}
Problem 3: I lose the blanks at the start of the stream rather than at the end, as I have experienced before.
So, am I doing this the right way? I am sure that there must be a way to avoid needing a FIFO file that requires cleanup afterwards by using file descriptors, but I cannot seem to do this, as I cannot assign either side of the problem to a file descriptor that is not attached to a file or a FIFO file.
I can try to resolve the problems with what I have, although to make this work properly I guess I need to pre-establish a pool of FIFO files that can be pulled in for use, or else I have a prerequisite of establishing this file before the command. So, for many reasons this is far from ideal. If anyone can advise me of a better way you would make my day/week/month/life :)
Thanks in advance...
Process substitution was available in bash from the ancient days. You absolutely do not have a version so ancient as to be unable to use it. Thus, there's no need to use a FIFO at all:
readToVar() { IFS= read -r -d '' "$1"; }
readToVar targetVar < <(printf '\n\n123\n456\n524\n789\n\n\n')
You'll observe that:
printf '%q\n' "$targetVar"
...correctly preserves the leading newlines as well as the trailing ones.
By contrast, in a use case where you can't afford to lose stdin:
readToVar() { IFS= read -r -d '' "$1" <"$2"; }
readToVar targetVar <(printf '\n\n123\n456\n524\n789\n\n\n')
If you really want to pipe to this command, are willing to require a very modern bash, and don't mind being incompatible with job control:
set +m # disable job control
shopt -s lastpipe # in a pipeline, parent shell becomes right-hand side
readToVar() { IFS= read -r -d '' "$1"; }
printf '\n\n123\n456\n524\n789\n\n\n' | grep 2 | readToVar targetVar
The issues you claim to run into with using a FIFO do not actually exist. Put this in a script, and run it:
#!/bin/bash
trap 'rm -rf "$tempdir"' 0 # cleanup on exit
tempdir=$(mktemp -d -t fifodir.XXXXXX)
mkfifo "$tempdir/fifo"
printf '\n\n123\n456\n524\n789\n\n\n' >"$tempdir/fifo" &
IFS= read -r -d '' content <"$tempdir/fifo"
printf '%q\n' "$content" # print content to console
You'll notice that, when run in a script, there is no "noise" printed to the screen, because all that status is explicitly tied to job control, which is disabled by default in scripts.
You'll also notice that both leading and trailing newlines are correctly represented.
One idea, tell me I am crazy, might be to use the !! notation to grab the line just executed. If there were a command that could terminate a pipeline and stop it actually executing, while the shell still considered it a successful execution (I am thinking of something like the true command), I could then use !! to grab that line and call my existing function to execute it with process substitution or something. I could then wrap this into an alias, something like: alias streamTo=' | true ; LAST_EXEC="!!" ; myNewCommandVariation <<<' which I think could be used something like: $ cmd1 | cmd2 | myNewCommandVariation THE_VAR_NAME_TO_SET, and the <<< from the alias would pass the var name to the command as an arg or stdin; either way, the command would not be at the end of a pipeline. How mad is this idea?
Not a full answer but rather a first point: is there some good reason not to use mktemp for creating a new file with a random name? As far as I can see, your getFifo() function doesn't do much more than that.
mktemp -u
will give you a fresh, unused name without creating anything; then you can use mkfifo with this name.
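For example (note that mktemp -u is inherently racy, since the name is only guaranteed free at the moment it is generated; the mktemp -d approach in the answer above avoids that by creating the FIFO inside a fresh private directory):
FIFOfile=$(mktemp -u "${TMPDIR:-/tmp}/diamondLang-FIFO.XXXXXX")   # generate an unused name, create nothing
mkfifo "$FIFOfile"                                                # now actually create the FIFO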

Read values into a shell variable from a pipe

I am trying to get bash to process data from stdin that gets piped into it, but no luck. What I mean is, none of the following work:
echo "hello world" | test=($(< /dev/stdin)); echo test=$test
test=
echo "hello world" | read test; echo test=$test
test=
echo "hello world" | test=`cat`; echo test=$test
test=
where I want the output to be test=hello world. I've tried putting "" quotes around "$test", but that doesn't work either.
Use
IFS= read var << EOF
$(foo)
EOF
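Applied to the example in the question, that would be something like:
IFS= read test << EOF
$(echo "hello world")
EOF
echo "test=$test"   # test=hello world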
You can trick read into accepting from a pipe like this:
echo "hello world" | { read test; echo test=$test; }
or even write a function like this:
read_from_pipe() { read "$@" <&0; }
But there's no point - your variable assignments may not last! A pipeline may spawn a subshell, where the environment is inherited by value, not by reference. This is why read doesn't bother with input from a pipe - it's undefined.
FYI, http://www.etalabs.net/sh_tricks.html is a nifty collection of the cruft necessary to fight the oddities and incompatibilities of Bourne shells, sh.
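A quick way to see the subshell effect for yourself (default bash, lastpipe not enabled):
x=outer
echo inner | read x   # read runs in a subshell, so this assignment is lost
echo "x=$x"           # prints: x=outer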
if you want to read in lots of data and work on each line separately you could use something like this:
cat myFile | while read x ; do echo $x ; done
if you want to split the lines up into multiple words you can use multiple variables in place of x like this:
cat myFile | while read x y ; do echo $y $x ; done
alternatively:
while read x y ; do echo $y $x ; done < myFile
But as soon as you want to do anything really clever with this sort of thing, you're better off going with a scripting language like Perl, where you could try something like this:
perl -ane 'print "$F[0]\n"' < myFile
There's a fairly steep learning curve with perl (or I guess any of these languages) but you'll find it a lot easier in the long run if you want to do anything but the simplest of scripts. I'd recommend the Perl Cookbook and, of course, The Perl Programming Language by Larry Wall et al.
This is another option
$ read test < <(echo hello world)
$ echo $test
hello world
read won't read from a pipe (or possibly the result is lost because the pipe creates a subshell). You can, however, use a here string in Bash:
$ read a b c <<< $(echo 1 2 3)
$ echo $a $b $c
1 2 3
But see chepner's answer below for information about lastpipe.
I'm no expert in Bash, but I wonder why this hasn't been proposed:
stdin=$(cat)
echo "$stdin"
One-liner proof that it works for me:
$ fortune | eval 'stdin=$(cat); echo "$stdin"'
bash 4.2 introduces the lastpipe option, which allows your code to work as written, by executing the last command in a pipeline in the current shell, rather than a subshell.
shopt -s lastpipe
echo "hello world" | read test; echo test=$test
A smart script that can read data both from a pipe and from command line arguments:
#!/bin/bash
if [[ -p /dev/stdin ]]
then
PIPE=$(cat -)
echo "PIPE=$PIPE"
fi
echo "ARGS=$#"
Output:
$ bash test arg1 arg2
ARGS=arg1 arg2
$ echo pipe_data1 | bash test arg1 arg2
PIPE=pipe_data1
ARGS=arg1 arg2
Explanation: When a script receives any data via pipe, then the /dev/stdin (or /proc/self/fd/0) will be a symlink to a pipe.
/proc/self/fd/0 -> pipe:[155938]
If not, it will point to the current terminal:
/proc/self/fd/0 -> /dev/pts/5
The bash [[ -p test can check whether it is a pipe or not.
cat - reads from stdin.
If we use cat - when there is no stdin, it will wait forever, that is why we put it inside the if condition.
The syntax for an implicit pipe from a shell command into a bash variable is
var=$(command)
or
var=`command`
In your examples, you are piping data to an assignment statement, which does not expect any input.
In my eyes the best way to read from stdin in bash is the following one, which also lets you work on the lines before the input ends:
while read LINE; do
echo $LINE
done < /dev/stdin
The first attempt was pretty close. This variation should work:
echo "hello world" | { test=$(< /dev/stdin); echo "test=$test"; };
and the output is:
test=hello world
You need braces after the pipe to enclose the assignment to test and the echo.
Without the braces, the assignment to test (after the pipe) is in one shell, and the echo "test=$test" is in a separate shell which doesn't know about that assignment. That's why you were getting "test=" in the output instead of "test=hello world".
Because I fell for it, I would like to drop a note.
I found this thread, because I have to rewrite an old sh script
to be POSIX compatible.
This basically means to circumvent the pipe/subshell problem introduced by POSIX by rewriting code like this:
some_command | read a b c
into:
read a b c << EOF
$(some_command)
EOF
And code like this:
some_command |
while read a b c; do
# something
done
into:
while read a b c; do
# something
done << EOF
$(some_command)
EOF
But the latter does not behave the same on empty input.
With the old notation the while loop is not entered on empty input,
but in POSIX notation it is!
I think it's due to the newline before EOF,
which cannot be omitted.
The POSIX code which behaves more like the old notation
looks like this:
while read a b c; do
case $a in ("") break; esac
# something
done << EOF
$(some_command)
EOF
In most cases this should be good enough.
But unfortunately this still does not behave exactly like the old notation
if some_command prints an empty line.
In the old notation the while body is executed
and in POSIX notation we break in front of the body.
An approach to fix this might look like this:
while read a b c; do
case $a in ("something_guaranteed_not_to_be_printed_by_some_command") break; esac
# something
done << EOF
$(some_command)
echo "something_guaranteed_not_to_be_printed_by_some_command"
EOF
Piping something into an expression involving an assignment doesn't behave like that.
Instead, try:
test=$(echo "hello world"); echo test=$test
The following code:
echo "hello world" | ( test=($(< /dev/stdin)); echo test=$test )
will work too, but it will open another new sub-shell after the pipe, where
echo "hello world" | { test=($(< /dev/stdin)); echo test=$test; }
won't.
I had to disable job control to make use of chepner's method (I was running this command from the terminal):
set +m; shopt -s lastpipe
echo "hello world" | read test; echo test=$test
echo "hello world" | test="$(</dev/stdin)"; echo test=$test
Bash Manual says:
lastpipe
If set, and job control is not active, the shell runs the last command
of a pipeline not executed in the background in the current shell
environment.
Note: job control is turned off by default in a non-interactive shell and thus you don't need the set +m inside a script.
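So inside a script, the same idea reduces to something like:
#!/bin/bash
shopt -s lastpipe   # job control is already off in a non-interactive shell
echo "hello world" | read test
echo "test=$test"   # test=hello world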
I think you were trying to write a shell script which could take input from stdin, but while you were trying to do it inline, you got lost trying to create that test= variable.
I think it does not make much sense to do it inline, and that's why it does not work the way you expect.
I was trying to reduce
$( ... | head -n $X | tail -n 1 )
to get a specific line from various input.
so I could type...
cat program_file.c | line 34
So I needed a small shell program able to read from stdin, like you do.
22:14 ~ $ cat ~/bin/line
#!/bin/sh
if [ $# -ne 1 ]; then echo enter a line number to display; exit; fi
cat | head -n $1 | tail -n 1
22:16 ~ $
there you go.
The question is how to catch output from a command to save in variable(s) for use later in a script. I might repeat some earlier answers, but I'll try to line up all the answers I can think of to compare and comment on, so bear with me.
The intuitive construct
echo test | read x
echo x=$x
is valid in Korn shell because ksh has implemented it so that the last command in a piped series is part of the current shell, i.e. the previous pipe commands are subshells. In contrast, other shells define all piped commands as subshells, including the last.
This is the exact reason I prefer ksh.
But when having to cope with other shells, e.g. bash, another construct must be used.
To catch 1 value this construct is viable:
x=$(echo test)
echo x=$x
But that only caters for 1 value to be collected for later use.
To catch more values this construct is useful and works in bash and ksh:
read x y <<< $(echo test again)
echo x=$x y=$y
There is a variant which I have noticed work in bash but not in ksh:
read x y < <(echo test again)
echo x=$x y=$y
The <<< $(...) is a here-string variant which gives all the meta handling of a standard command line. < <(...) is an input redirection from a process substitution.
I use "<<< $(" in all my scripts now because it seems the most portable construct between shell variants. I have a tools set I carry around on jobs in any Unix flavor.
Of course there is the universally viable but crude solution:
command-1 | { command-2; echo "x=test; y=again" > file.tmp; chmod 700 file.tmp; }
. ./file.tmp
rm file.tmp
echo x=$x y=$y
I wanted something similar - a function that parses a string that can be passed as a parameter or piped.
I came up with a solution as below (works as #!/bin/sh and as #!/bin/bash)
#!/bin/sh
set -eu
my_func() {
local content=""
# if the first param is not set
if [ -z ${1+x} ]; then
# read content from a pipe if passed or from a user input if not passed
while read line; do content="${content}$line"; done < /dev/stdin
# first param was set (it may be an empty string)
else
content="$1"
fi
echo "Content: '$content'";
}
printf "0. $(my_func "")\n"
printf "1. $(my_func "one")\n"
printf "2. $(echo "two" | my_func)\n"
printf "3. $(my_func)\n"
printf "End\n"
Outputs:
0. Content: ''
1. Content: 'one'
2. Content: 'two'
typed text
3. Content: 'typed text'
End
For the last case (3.) you need to type, hit Enter, and Ctrl+D to end the input.
How about this:
echo "hello world" | echo test=$(cat)
