Using named pipes to create a 'loop' - linux

I'm very new to shell scripting and I am trying to get to grips with piping. I could be heading in completely the wrong direction here...
What I have is a shell script that contains a simple while true loop; within this loop I have netcat listening on a specified port and piping its input to a binary that is awaiting commands on stdin. This is Script-A.
I have a second shell script that accepts input as arguments and then echoes those arguments to the port that netcat is listening on. This is Script-B.
My aim is to get the binary's output from Script-A back into Script-B via netcat, so that it can be returned on stdout. The binary has to be initialized and awaiting input.
This is what I have:
Script-A
while true; do
nc -kl 1234 | /binarylocation/ --readargumentsfromstdinflag
done
Script-B
foo=$(echo "$*" | nc localhost 1234)
echo "$foo"
With this setup, the binary's output simply appears in Script-A's terminal.
After doing some research I got to this point: I am trying to use a named pipe to create a sort of loop from the binary back to netcat, but it's still not working -
Script-A
mkfifo foobar
while true; do
nc -kl 1234 < foobar | /binarylocation/ --readargumentsfromstdinflag > foobar
done
Script-B hasn't changed.
Bear in mind my shell scripting experience spans a total of about a single day. Thank you.

The problem is in your Script-B: netcat reads from stdin and exits immediately when stdin is closed, without waiting for the response.
You can see this for yourself if you do the following:
foo=$( ( echo -e "$*"; sleep 2 ) | nc localhost 1234)
echo "$foo"
nc has a parameter for this stdin behaviour:
-q      after EOF on stdin, wait the specified number of seconds and
        then quit. If seconds is negative, wait forever.
So you should do:
foo=$( echo -e "$*" | nc -q5 localhost 1234)
echo "$foo"

Related

ncat echo server, write incoming message to shell?

Is there a way to get ncat to print what it receives to the terminal before it echoes it back on the port with cat? It looks like the executed code doesn't have access to stdio.
I think tee is supposed to copy stdout to cat with something like this, but I don't see any output.
ncat -e '/usr/bin/tee > (/bin/cat)' -l -p 2048
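A plausible explanation is that -e executes the given command directly, so shell syntax like process substitution is never parsed. Here is a hedged sketch of an alternative using ncat's -c (--sh-exec) option, which runs the command through /bin/sh: tee's stdout goes back over the socket, while the copy written to /dev/stderr (inherited from ncat) shows up on the terminal.
ncat -l -p 2048 -c 'tee /dev/stderr'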

Explanation needed for tee, process substitution, redirect...and different behaviors in Bash and Z shell ('zsh')

Recently at work, I came across an interesting problem regarding tee and process substitution.
Let's start with examples:
I have three little scripts:
$ head *.sh
File one.sh
#!/bin/bash
echo "one starts"
if [ -p /dev/stdin ]; then
echo "$(cat /dev/stdin) from one"
else
echo "no stdin"
fi
File two.sh
#!/bin/bash
echo "two starts"
if [ -p /dev/stdin ]; then
echo "$(cat /dev/stdin) from two"
else
echo "no stdin"
fi
File three.sh
#!/bin/bash
echo "three starts"
if [ -p /dev/stdin ]; then
sed 's/^/stdin for three: /' /dev/stdin
else
echo "no stdin"
fi
All three scripts read from standard input and print something to standard output.
one.sh and two.sh are quite similar, but three.sh is a bit different: it adds a prefix to show what it reads from standard input.
Now I am going to execute two commands:
1: echo "hello" | tee >(./one.sh) >(./two.sh) | ./three.sh
2: echo "hello" | tee >(./one.sh) >(./two.sh) >(./three.sh) >/dev/null
First in Bash and then in Z shell (zsh).
Bash (GNU bash, version 5.0.17(1))
$ echo "hello" | tee >(./one.sh) >(./two.sh) |./three.sh
three starts
stdin for three: hello
stdin for three: one starts
stdin for three: two starts
stdin for three: hello from two
stdin for three: hello from one
Why are the outputs of one.sh and two.sh mixed with the original "hello" and passed to three.sh? I expected to see the output of one and two on standard output, with only the "hello" being passed to three.sh.
Now the other command:
$ echo "hello" | tee >(./one.sh) >(./two.sh) >(./three.sh) >/dev/null
one starts
two starts
three starts
stdin for three: hello
hello from two
hello from one
(note: here I don't get a prompt back unless I press Enter or Ctrl-C)
I redirect all standard output to /dev/null. Why do I see the output from all the process substitutions this time? Doesn't this behavior conflict with the one above?
Why don't I get the prompt back after executing the command?
Why does the command start in the order one->two->three, but the outputs come in 3->2->1? Even if I add sleep 3 in three.sh, the output is always 3-2-1. I know it should have something to do with standard input blocking, but I'd like to learn the exact reason.
Zsh (zsh 5.8 (x86_64-pc-linux-gnu))
Both commands,
echo "hello" | tee >(./one.sh) >(./two.sh) >(./three.sh) >/dev/null
echo "hello" | tee >(./one.sh) >(./two.sh) |./three.sh
Give the expected result:
one starts
three starts
two starts
hello from two
hello from one
stdin for three: hello
It works as expected, but the order of the output is random; it seems that Z shell does something non-blocking here, and the order of the output depends on how long each script runs. What exactly leads to this result?
echo "hello"|tee >(./one.sh) >(./two.sh) |./three.sh
There are two possible orders of operations for the tee part of the pipeline.
First
Redirect standard output to a pipe that's connected to ./three.sh's standard input.
Set up the pipes and subprocesses for the process substitutions. They inherit the same redirected standard output pipe used by tee.
Execute tee.
Second
Set up the pipes and subprocesses for the process substitutions. They share the same default standard output: the terminal.
Redirect tee's standard output to a pipe that's connected to ./three.sh's standard input. This redirection doesn't affect the pipes set up in step 1.
Execute tee.
bash uses the first set of operations, zsh uses the second. In both cases, the order you see output from your shell scripts in is controlled by your OS's process scheduler and might as well be random. In the case where you redirect tee's standard output to /dev/null, they both seem to follow the second scenario and set up the subprocesses before the parent tee's redirection. This inconsistency on bash's part does seem unusual and a potential source of subtle bugs.
I can't replicate the missing prompt issue, but that's with bash 4.4.20 - I don't have 5 installed on this computer.
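As a quick probe for which scenario a given shell follows (a sketch based on the two scenarios above, not from the thread): if "from subst" shows up with the prefix, the process substitution inherited tee's redirected stdout (scenario one); if it shows up bare, it went straight to the terminal (scenario two).
echo hello | tee >(echo "from subst") | sed 's/^/via pipe: /'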

Loop ends prematurely when executing a command via SSH in a Bash function [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only with the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands, and by default ssh reads from stdin, which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
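For instance, a minimal sketch of such a loop with ssh's stdin detached (the host and file names here are hypothetical):
while read -r line; do
ssh -n user@host "do_work $line"   # -n: ssh reads /dev/null, leaving the file for read
done < targets.txt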
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and the file descriptor number added to the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
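For example, a nested pair might look like this (the file names are just placeholders):
while read -u 9 host; do
while read -u 8 service; do
echo "checking $service on $host"
done 8< services.txt
done 9< hosts.txt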
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
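A quick illustration of the backslash difference (a toy example):
printf 'a\\nb\n' | { read line; echo "$line"; }     # prints 'anb': the backslash escapes the n
printf 'a\\nb\n' | { read -r line; echo "$line"; }  # prints 'a\nb': the backslash is kept literally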
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when using a heredoc while piping output to another program.
So use of /dev/null as stdin is preferred.
#!/bin/bash
while read ONELINE ; do
ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
if [ ${PIPESTATUS[0]} -ne 0 ] ; then
echo "aborting loop"
exit ${PIPESTATUS[0]}
fi
done < input_list.txt
This was happening to me because I had set -e, and a grep in the loop was returning with no output (which gives a non-zero exit code).
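A toy sketch of that failure mode: under set -e, a grep that matches nothing returns 1 and aborts the script mid-loop, which looks just like the loop ending prematurely.
set -e
while read -r line; do
grep -q pattern <<< "$line" || true   # without '|| true', a non-match kills the script here
done < input_list.txt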

pseudo-terminal error will not be allocated because stdin is not a terminal - sudo

There are other threads with this same topic but my issue is unique. I am running a bash script that has a function that sshes to a remote server and runs a sudo command on the remote server. I'm using the ssh -t option to avoid the requiretty issue. The offending line of code works fine as long as it's NOT being called from within the while loop. The while loop basically reads from a csv file on the local server and calls the checkAuthType function:
while read inputline
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done < configfile.csv
This is the function that sits at the top of the script (outside of any while loops):
function checkAuthType()
{
if [ $2 == linux ]; then
LINE=`ssh -t $1 'sudo grep "PasswordAuthentication" /etc/ssh/sshd_config | grep -v "yes\|Yes\|#"'`
fi
if [ $2 == unix ]; then
LINE=`ssh -n $1 'grep "PasswordAuthentication" /usr/local/etc/sshd_config | grep -v "yes\|Yes\|#"'`
fi
<more irrelevant code>
}
So, the offending line is the line that has the sudo command within the function. I can change the command to something simple like "sudo ls -l" and I will still get the "stdin is not a terminal" error. I've also tried "ssh -t -t" but to no avail. But if I call the checkAuthType function from outside of the while loop, it works fine. What is it about the while loop that changes the terminal and how do I fix it? Thank you one thousand times in advance.
Another option to try to get around the problem would be to redirect the file to a different file descriptor and force read to read from it instead.
while read inputline <&3
do
ARRAY=(`echo $inputline | tr ',' ' '`)
HOSTNAME=${ARRAY[0]}
OS_TYPE=${ARRAY[1]}
checkAuthType $HOSTNAME $OS_TYPE
<more irrelevant code>
done 3< configfile.csv
I am guessing you are testing with Linux. You should try adding the -n flag to your (Linux) ssh command to avoid having ssh read from stdin; since it normally reads from stdin, the while loop is feeding it your csv.
UPDATE
You should (usually) use the -n flag when scripting with SSH, and the flag is typically needed for 'expected behavior' when using a while read-loop. It does not seem to be the main issue here, though.
There are probably other solutions to this, but you could try adding another -t flag to force pseudo-tty allocation when stdin is not a terminal:
ssh -n -t -t
BroSlow's approach with a different file descriptor seems to work! Since the read command reads from fd 3 and not stdin, ssh, and hence sudo, still has (or gets) a tty/pty as stdin.
# simple test case: read takes its input from fd 3, so ssh and sudo keep the terminal as stdin
while read line <&3; do
sudo -k   # invalidate cached credentials so sudo must prompt each time
echo "$line"
ssh -t localhost 'sudo ls -ld /'
done 3<&- 3< <(echo 1; sleep 3; echo 2; sleep 3)
