Catch CTRL+D in ksh - Linux

How can I catch CTRL+D keyboard input in ksh and then exit?
while true; do
read cmd
echo $cmd
if [ "$cmd" = "?????" ]; then
break
fi
done

CTRL-D means "end of file" (EOF), and what you need to do is detect that condition. Strictly speaking there is no EOF character that reaches your program: in canonical mode, pressing CTRL-D makes the terminal driver hand whatever has been typed so far to the reading application immediately. If the current line is empty, the application's read() gets zero bytes, and reading zero bytes is the convention for end of file. (This is also why C's getchar() reports EOF with a sentinel value such as -1, which is outside the range of any real character.)
You are probably running into an issue because read fails (returns a non-zero exit status) at EOF and leaves $cmd empty, so there is no string you can put in place of "?????" that will ever match; and since while true never checks read's status, the loop keeps spinning instead of breaking out the way you wanted.
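You can watch this happen in isolation — a quick sketch runnable in any POSIX shell:
# the first read consumes the only line and succeeds (status 0);
# the second hits end of input and fails (non-zero status)
printf 'one\n' | {
    read cmd; echo "first: status=$?, cmd=$cmd"
    read cmd; echo "second: status=$?, cmd='$cmd'"
}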
Try:
#!/usr/bin/env ksh
while true; do
    # read returns a non-zero status at EOF (CTRL-D on an empty line)
    if ! read cmd; then
        # are we out of input? let's get a taco
        echo 'taco time'
        break
    fi
    echo "$cmd"
done
When you move on to actual signals (like ^C, which sends SIGINT), the general form is:
trap [commands] [signals to catch]
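For instance, a minimal sketch that catches SIGINT and exits cleanly:
#!/usr/bin/env ksh
# run cleanup code when CTRL-C (SIGINT) arrives, then exit
trap 'echo "caught SIGINT, cleaning up"; exit 130' INT
while read cmd; do
    echo "$cmd"
done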

Just use while read — the loop ends as soon as read returns a non-zero status at EOF:
while read cmd; do
    echo "$cmd"
done

Related

bash script loop breaks [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only on the very first line of the target file and stops after that line gets processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
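As a sketch, assuming do_work.sh is essentially a wrapper around ssh (the user, host, and remote command here are made up):
#!/bin/sh
# do_work.sh (hypothetical): -n redirects ssh's stdin to /dev/null,
# so it cannot swallow the calling loop's input file
ssh -n "deploy@$1" 'uptime'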
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and a file-descriptor number in front of the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
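For example, a sketch of two nested read loops on separate descriptors (the file names are placeholders):
while read -u 9 host; do
    while read -u 8 service; do
        echo "checking $service on $host"
    done 8< services.txt
done 9< hosts.txt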
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
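To see what -r changes, compare how a backslash in the input comes through:
$ printf 'a\\tb\n' | { read x; echo "$x" ; }     # without -r, the backslash is eaten
atb
$ printf 'a\\tb\n' | { read -r x; echo "$x" ; }  # with -r, it survives intact
a\tb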
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
Note that ssh's -n option redirects stdin from /dev/null, so it cannot be combined with a here document that feeds ssh the remote commands. In that case the here document itself ties up ssh's stdin; and because ssh's output is piped into another program, you need PIPESTATUS rather than $? to check ssh's own exit status:
#!/bin/bash
while read ONELINE ; do
    # the here document occupies ssh's stdin, so ssh can't consume input_list.txt
    ssh ubuntu@host_xyz <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    # save ssh's exit status before another command overwrites PIPESTATUS
    rc=${PIPESTATUS[0]}
    if [ $rc -ne 0 ] ; then
        echo "aborting loop"
        exit $rc
    fi
done < input_list.txt
This was happening to me because I had set -e, and a grep inside the loop was finding no match (grep exits non-zero when it matches nothing).
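A sketch of the guard that fixes it (the file names are made up):
#!/bin/bash
set -e
while read -r line; do
    # grep exits 1 on no match, which aborts the script under set -e;
    # '|| true' keeps an empty result from killing the loop
    match=$(grep "$line" patterns.txt || true)
    echo "$line -> $match"
done < targets.txt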

Loop ends prematurely when executing a command via SSH in a Bash function [duplicate]


'read -r' doesn't read beyond first line in a loop that does ssh [duplicate]


How to read just a single character in shell script

I want something like getche() in C. How can I read just a single character of input from the command line? Can we do it using the read command?
In bash, read can do it:
read -n1 ans
read -n1 works for bash. But stty raw mode prevents ctrl-c from working and can get you stuck in an input loop with no way out; also, the man page says stty -raw is not guaranteed to return your terminal to the same state.
So, building on dtmilano's answer, using stty -icanon -echo avoids those issues.
#!/bin/ksh
## also works in /bin/{sh,zsh,bash,...}
# usage: read_char varname
read_char() {
    saved=$(stty -g)    # save the current terminal settings
    stty -icanon -echo  # deliver input byte by byte, without echo
    eval "$1=\$(dd bs=1 count=1 2>/dev/null)"
    stty "$saved"       # restore the terminal exactly as it was
}
read_char char
echo "got $char"
In ksh you can basically do:
stty raw
REPLY=$(dd bs=1 count=1 2> /dev/null)
stty -raw
read -n1        # reads exactly one character into $REPLY
echo "$REPLY"   # prints the result on the screen
doc: https://www.computerhope.com/unix/bash/read.htm
Some people mean by "input from command line" an argument given to the command instead of reading from STDIN... so please don't shoot me. But I have a (maybe not the most sophisticated) solution for STDIN, too!
When using bash and having the data in a variable you can use parameter expansion
${parameter:offset:length}
and of course you can perform that on given args ($1, $2, $3, etc.)
Script
#!/usr/bin/env bash
testdata1="1234"
testdata2="abcd"
echo ${testdata1:0:1}
echo ${testdata2:0:1}
echo ${1:0:1} # argument #1 from command line
Execution
$ ./test.sh foo
1
a
f
reading from STDIN
Script
#!/usr/bin/env bash
echo please type in your message:
read message
echo 1st char: ${message:0:1}
Execution
$ ./test.sh
please type in your message:
Foo
1st char: F

Is it possible to make a bash shell script interact with another command line program?

I am using an interactive command line program in a Linux terminal running the bash shell. I have a definite sequence of commands that I input to the program, which writes its output to standard output. One of these commands is a 'save' command that writes the output of the previously run command to a file on disk.
A typical cycle is:
$prog
$$cmdx
$$<some output>
$$save <filename>
$$cmdy
$$<again, some output>
$$save <filename>
$$q
$<back to bash shell>
$ is the bash prompt
$$ is the program's prompt
q is the quit command for prog
prog is such that it appends the output of the previous command to filename
How can I automate this process? I would like to write a shell script that can start this program, cycle through the steps by feeding it the commands one by one, and then quit. I hope the save command still works correctly.
If your command doesn't care how fast you give it input, and you don't really need to interact with it, then you can use a heredoc.
Example:
#!/bin/bash
prog <<EOD
cmdx
save filex
cmdy
save filey
q
EOD
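One detail to watch with here documents: an unquoted delimiter lets your shell expand $variables inside the document before prog sees them, while a quoted delimiter passes the text through literally. A quick sketch (the filename is made up):
# unquoted delimiter: $HOSTNAME is expanded by your shell first
prog <<EOD
save output-$HOSTNAME.txt
EOD

# quoted delimiter: prog receives the literal text $HOSTNAME
prog <<'EOD'
save output-$HOSTNAME.txt
EOD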
If you need branching based on the output of the program, or if your program is at all sensitive to the timing of your commands, then Expect is what you want.
I recommend you use Expect. This tool is designed to automate interactive shell applications.
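A minimal sketch of that approach, assuming expect is installed and that prog's prompt is literally "$$ " (both are assumptions here):
#!/bin/bash
# drive prog through expect; adjust the prompt pattern to the real one
expect -c '
    spawn prog
    expect "\$\$ "
    send "cmdx\r"
    expect "\$\$ "
    send "save filex\r"
    expect "\$\$ "
    send "q\r"
    expect eof
'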
Where there's a need, there's a way! I think it's a good bash lesson to see how process management and IPC work. The best solution is, of course, Expect. But the real reason is that pipes can be tricky, and many commands are designed to wait for data, meaning the process can hang for reasons that may be difficult to predict. But learning how and why reminds us of what is going on under the hood.
When two processes engage in a conversation, the danger is that one or both will try to read data that will never arrive. The rules of engagement have to be crystal clear. Things like CRLF and character encoding can kill the party.
Luckily, two close partners like a bash script and its child process are relatively easy to keep in line. The easiest thing to miss is that bash launches a child process for just about everything it does. If you can make it work with bash, you thoroughly know what you're doing.
The point is that we want to talk to another process. Here's a server:
# a really bad SMTP server
# a hint at courtesy to the client
shopt -s nocasematch
echo "220 $HOSTNAME SMTP [$$]"
while true
do
read
[[ "$REPLY" =~ ^helo\ [^\ ] ]] && break
[[ "$REPLY" =~ ^quit ]] && echo "Later" && exit
echo 503 5.5.1 Nice guys say hello.
done
NAME=`echo "$REPLY" | sed -r -e 's/^helo //i'`
echo 250 Hello there, $NAME
while read
do
[[ "$REPLY" =~ ^mail\ from: ]] && { echo 250 2.1.0 Good guess...; continue; }
[[ "$REPLY" =~ ^rcpt\ to: ]] && { echo 250 2.1.0 Keep trying...; continue; }
[[ "$REPLY" =~ ^quit ]] && { echo Later, $NAME; exit; }
echo 502 5.5.2 Please just QUIT
done
echo Pipe closed, exiting
Now, the script that hopefully does the magic.
# Talk to a subprocess using named pipes
rm -fr A B # don't use old pipes
mkfifo A B
# server will listen to A and send to B
./smtp.sh < A > B &
# A plain redirection to A would close the pipe (EOF to the server)
# as soon as each write finished; holding a file descriptor open
# keeps the pipe alive between writes.
exec 3>A
read < B
echo "$REPLY"
# send an email, so long as response codes look good
while read L
do
echo "> $L"
echo $L > A
read < B
echo $REPLY
[[ "$REPLY" =~ ^2 ]] || break
done <<EOF
HELO me
MAIL FROM: me
RCPT TO: you
DATA
Subject: Nothing
Message
.
EOF
# This is tricky, and the reason sane people use Expect. If we
# send QUIT and then wait on B (ie. cat B) we may have trouble.
# If the server exits, the "Later" response in the pipe might
# disappear, leaving the cat command (and us) waiting for data.
# So, let cat have our STDOUT and move on.
cat B &
# Now, we should wait for the cat process to get going before we
# send the QUIT command. If we don't, the server will exit, the
# pipe will empty and cat will miss its chance to show the
# server's final words.
echo -n > B # also, 'sleep 1' will probably work.
echo "> quit"
echo "quit" > A
# close the file handle
exec 3>&-
rm A B
Notice that we are not simply dumping the SMTP commands on the server. We check
each response code to make sure things are OK. In this case, things will not be
OK and the script will bail.
I use Expect to interact with the shell for switch and router backups. A bash script calls the expect script with the correct variables.
for i in <list of machines> ; do expect_script.sh "$i" ; done
This will ssh to each box, run the backup commands, copy out the appropriate files, and then move on to the next box.
For simple use cases you may use a combination of subshell, echo & sleep:
# the manual session, in Terminal.app:
telnet localhost 25
helo localhost
ehlo localhost
quit

# the same session, automated:
(sleep 5; echo "helo localhost"; sleep 5; echo "ehlo localhost"; sleep 5; echo quit) |
telnet localhost 25
printf 'cmdx\nsave\n...etc...\n' | prog
..? (printf rather than echo, since bash's echo doesn't interpret \n without -e)
