Iterate through a list using 'while read' in bash - linux

I have a list of IP addresses, and my end goal is to ssh into each one and reset them one at a time. I was asked to use Linux / Bash, with which I am not very familiar. My code right now will take the first IP from the list and connect to it, but it never moves on past that point. I believe the issue is somewhere between the while read oneip3 and do lines. Any help is greatly appreciated.
The way I run this script is as follows: (I have a list of IP addresses in a separate text file):
./runscript.txt ip_list.txt
while read oneip3
do
(sleep 5
echo "yes\r"
sleep 3
echo -e "password\r"
sleep 3
echo -e "reset\r"
sleep 3
echo -e "yes\r"
sleep 20
echo -e "\r"
) | ssh -t -t -oHostKeyAlgorithms=+ssh-dss admin@$oneip3
done < $1

You didn't provide an SSH command argument, so ssh opens an interactive shell on the remote machine.
That is a good reason to be stuck on the first machine (though there may be other reasons...).
Try this to debug:
... | ssh -t -t -oHostKeyAlgorithms=+ssh-dss "admin@$oneip3" pwd
The other remarks in the comments about StrictHostKeyChecking seem good too (if you are really concerned about security, you can deploy all the needed keys by hand first).
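To make that concrete, here is a minimal sketch of the debug loop, assuming the same host list file and admin user as above. Giving ssh an explicit remote command (pwd here) means the session ends on its own instead of parking in an interactive shell, and the < /dev/null keeps ssh from swallowing the rest of the host list through its stdin:
while read -r oneip3
do
    # run one command per host and exit, rather than opening a shell
    ssh -t -t -oHostKeyAlgorithms=+ssh-dss "admin@$oneip3" pwd < /dev/null
done < "$1"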


How do I properly use SSH heredoc?

This question is somewhat related to the question I asked here, but it has not been adequately answered. What interests me here is the following:
When I run the command type -t test on a remote computer, I get the answer 'function', because 'test' is an existing function defined in the .bashrc file on the remote computer.
However, when I run this SSH command on the local computer,
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t test)
echo $VAR
EOSSH
)"
echo $s
I don't get anything printed. The first question would be how do I make this work?
The second question builds on the previous one. That is, my ultimate goal is to specify on the local computer which function I want to check on the remote computer, and get back an adequate answer, i.e.:
a="test"
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t $a)
echo $VAR
EOSSH
)"
echo $s
So, I would like the variable s to be equal to 'function'. How to do it?
how do I make this work?
Either load .bashrc (. .bashrc) or start an interactive session (bash -i).
Because your session is non-interactive: if you want .bashrc loaded and it has no guard against non-interactive use, just source it. If it does have such a guard, consider moving your function to a separate file you can source. If you instead go the interactive route, be prepared for the session to print /etc/motd, /etc/issue and other interactive noise.
Remove -T - it is redundant, as ssh does not allocate a tty for non-interactive commands anyway.
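A minimal sketch of the first option, assuming the remote login shell is bash, the function test is defined in the remote ~/.bashrc, and that file tolerates being sourced non-interactively:
s="$(ssh "$HOST" '. ~/.bashrc; type -t test')"
echo "$s"    # prints "function" if test is defined remotely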
I would like the variable s to be equal to 'function'. How to do it?
I recommend using declare to transfer all the work and context that you need. It is flexible, works generically, preserves stdin, and doesn't require you to deal with the intricacies of escaping inside a here document. Specifically, request a bash shell on the remote side and use printf "%q" to properly escape all the data.
functions_to_check=(a b c)
fn_exists() { [[ "$(LC_ALL=C type -t -- "$1" 2>/dev/null)" = function ]]; }
work() {
    for f in "${functions_to_check[@]}"; do
        if fn_exists "$f"; then
            echo "Great - function $f exists!"
        else
            echo "Och nuu - no function $f!"
        fi
    done
}
ssh "$host" "$(printf "%q " bash -c "
    $(declare -p functions_to_check)  # transfer variables
    $(declare -f fn_exists work)      # transfer functions
    work                              # run the work to do
")"

Automated telnet using shell with output logging

I would like to write an automated script that opens a telnet session and runs some commands. The point is that this will act as a kind of logging, so I have to open the pipe, send some commands, and store the outputs. I know how to do this in a while loop like:
(while true
do
echo ${user}
sleep 1
echo ${pass}
sleep 1
echo ${something}
.
.
done)|telnet ${IP}
The problem here is that the telnet pipe is opened/closed on every loop iteration, and I want to open it once at the beginning and then send commands in a loop until some condition is true.
NOTE: I am limited in the commands available, as I am working on an embedded system (no spawn, expect, etc...)
Thanks for your help ! :)
BR.
Does this work for you?
(echo ${user}
sleep 1
echo ${pass}
sleep 1
while true; do
echo ${something} | tee -a /tmp/logfile.txt
.
.
done
echo "exit") | telnet ${IP} | tee -a /tmp/logfile.txt
You can use the sshpass tool.
http://sourceforge.net/projects/sshpass/
tar -zxvf sshpass-1.05.tar.gz
cd sshpass-1.05
./configure
make && make install
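Once installed, usage is a one-liner. A hypothetical example (the host, user, and command are placeholders); note that -p exposes the password in the process list, so prefer -e (password in the SSHPASS environment variable) or -f (password file) where that matters:
sshpass -p 'secret' ssh admin@192.0.2.10 'reset'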

Grep not working in script but on console

I have a problem with a script. I have a voltage meter connected to a serial USB device (ttyUSB1).
The smart meter needs an initial sequence, followed shortly by a second command, to give out all of its information. That works fine. A line like 1.8.0*00(000898.46) comes in; this is the line I am interested in, and the number in brackets is the kWh reading I want. If I open a second terminal and do a cat /dev/ttyUSB1, it works fine and I can see the information coming in; after 4 to 5 seconds the line I want arrives. But the script is not working: if I start the script in one terminal, it just keeps waiting and grep never finishes. If I then start it in a second terminal, the first terminal finishes. Likewise, just running grep 1.8.0 /dev/ttyUSB1 -m1 in another terminal works, but not in the script.
I tried different methods with read and so on; none of them worked. To be honest, I don't understand much about scripting and usually succeed somehow, but here nothing helped :(
Please help. Thank you!
Arne
here the script:
#! /bin/bash
echo start
echo $'\x2f\x3f\x21\x0d' > /dev/ttyUSB1
sleep 1
echo ask
echo $'\x06\x30\x30\x30\x0d' > /dev/ttyUSB1
echo wait
grep 1.8.0 /dev/ttyUSB1 -m1
echo end
You can try creating a file with the voltmeter's output and grepping from that file:
#! /bin/bash
dev=/dev/ttyUSB1
file=/tmp/testfile
(tail -f $dev | tee $file) & # let's continuously copy in background
echo start
echo $'\x2f\x3f\x21\x0d' > $dev
sleep 1
echo ask
echo $'\x06\x30\x30\x30\x0d' > $dev
echo wait
sleep 5                 # the meter takes 4-5 seconds to answer
grep -m1 1.8.0 $file    # let's get the info from the file instead
echo end
kill %1                 # stop the background tail/tee copier
exit

linux pipe with multiple programs asking for user input

I wonder how to create a pipe
program 1 | ... | program N
where several of the programs ask for user input. The problem is that | starts the programs in parallel, so they all start reading from the terminal at the same time.
For such cases it would be useful to have a pipe | that starts program (i+1) only after program i has produced some output.
Edit:
Example:
cat /dev/sda | bzip2 | gpg -c | ssh user@host 'cat > backup'
Here both gpg -c and ssh ask for a password.
A workaround for this particular example would be the creation of ssh key pairs, but that is not possible on every system, and I was wondering whether there is a general solution.
gpg also allows the passphrase to be passed as a command line argument, but that is discouraged for security reasons.
You can use this construction:
(read a; echo "$a"; cat) > file
For example:
$ (read a; echo "$a"; echo cat is started > /dev/stderr; cat) > file
1
cat is started
2
3
Here 1, 2 and 3 were entered from the keyboard; "cat is started" was written by echo.
Contents of file after execution of the command:
$ cat file
1
2
3
I am now using:
#!/bin/bash
sudo echo "I am root!"
sudo cat /dev/disk0 | bzip2 | gpg -c | (read -n 1 a; (echo -n "$a"; cat) | ssh user@host 'cat > backup')
The first sudo prevents the second one from asking for the password again. As suggested above, the read postpones the start of ssh. I used -n 1 for read since I don't want to wait for a newline, and -n for echo to suppress the newline.
For one, you can give gpg the password with the --passphrase option.
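A sketch of that route (--batch keeps gpg non-interactive; with GnuPG 2 you may also need --pinentry-mode loopback, and be aware the passphrase can show up in the process list or shell history):
cat /dev/sda | bzip2 | gpg -c --batch --passphrase "$GPG_PASS" | ssh user@host 'cat > backup'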
For ssh, the best solution would be to log in with a key. But if you have to do it by password, the expect command is a good fit. Here's a good example: Use expect in bash script to provide password to SSH command
Expect also allows for interactive input - so if you don't want to hardcode your passwords, this might be the way to go.
I've needed something similar a few times before, where the first command in the pipeline requires a password to be entered, and the next command doesn't automatically cater for this (like the way that less does).
Similar to Igor's response, I find the use of read inside a subshell useful:
cmd1 | ( read; cat - | cmd2 )
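A hypothetical self-contained demonstration of what this construction does - the read swallows the first line (say, a password typed for cmd1), then cat forwards everything else to cmd2:
printf 'first\nsecond\nthird\n' | ( read -r; cat - | wc -l )
# prints 2: read consumed "first"; "second" and "third" reached wc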

Is it possible to make a bash shell script interact with another command line program?

I am using an interactive command line program in a Linux terminal running the bash shell. I have a definite sequence of commands that I input to the program. The program writes its output to standard output. One of these commands is a 'save' command, which writes the output of the previously run command to a file on disk.
A typical cycle is:
$prog
$$cmdx
$$<some output>
$$save <filename>
$$cmdy
$$<again, some output>
$$save <filename>
$$q
$<back to bash shell>
$ is the bash prompt
$$ is the program's prompt
q is the quit command for prog
prog is such that it appends the output of the previous command to filename
How can I automate this process? I would like to write a shell script that can start this program, cycle through the steps, feeding it the commands one by one, and then quit - with the save command working correctly.
If your command doesn't care how fast you give it input, and you don't really need to interact with it, then you can use a heredoc.
Example:
#!/bin/bash
prog <<EOD
cmdx
save filex
cmdy
save filey
q
EOD
If you need branching based on the output of the program, or if your program is at all sensitive to the timing of your commands, then Expect is what you want.
I recommend you use Expect. This tool is designed to automate interactive shell applications.
Where there's a need, there's a way! I think it's a good bash lesson to see how process management and IPC work. The best solution is, of course, Expect. But the real reason is that pipes can be tricky, and many commands are designed to wait for data, meaning that the process can hang for reasons that may be difficult to predict. Learning how and why reminds us of what is going on under the hood.
When two processes engage in a conversation, the danger is that one or both will
try to read data that will never arrive. The rules of engagement have to be
crystal clear. Things like CRLF and character encoding can kill the party.
Luckily, two close partners like a bash script and its child process are relatively easy to keep in line. The easiest thing to miss is that bash launches a child process for just about everything it does. If you can make it work with bash, you thoroughly know what you're doing.
The point is that we want to talk to another process. Here's a server:
#!/bin/bash
# a really bad SMTP server (saved as smtp.sh for the client below)
# a hint at courtesy to the client
shopt -s nocasematch
echo "220 $HOSTNAME SMTP [$$]"
while true
do
read
[[ "$REPLY" =~ ^helo\ [^\ ] ]] && break
[[ "$REPLY" =~ ^quit ]] && echo "Later" && exit
echo 503 5.5.1 Nice guys say hello.
done
NAME=`echo "$REPLY" | sed -r -e 's/^helo //i'`
echo 250 Hello there, $NAME
while read
do
[[ "$REPLY" =~ ^mail\ from: ]] && { echo 250 2.1.0 Good guess...; continue; }
[[ "$REPLY" =~ ^rcpt\ to: ]] && { echo 250 2.1.0 Keep trying...; continue; }
[[ "$REPLY" =~ ^quit ]] && { echo Later, $NAME; exit; }
echo 502 5.5.2 Please just QUIT
done
echo Pipe closed, exiting
Now, the script that hopefully does the magic.
#!/bin/bash
# Talk to a subprocess (the SMTP server above) using named pipes
rm -f A B    # don't reuse old pipes
mkfifo A B
# server will listen to A and send to B
./smtp.sh < A > B &
# Each bare redirect to A would open and close the pipe, giving the
# server EOF; holding a file descriptor open avoids that.
exec 3>A
read < B
echo "$REPLY"
# send an email, so long as response codes look good
while read L
do
echo "> $L"
echo $L > A
read < B
echo $REPLY
[[ "$REPLY" =~ ^2 ]] || break
done <<EOF
HELO me
MAIL FROM: me
RCPT TO: you
DATA
Subject: Nothing
Message
.
EOF
# This is tricky, and the reason sane people use Expect. If we
# send QUIT and then wait on B (ie. cat B) we may have trouble.
# If the server exits, the "Later" response in the pipe might
# disappear, leaving the cat command (and us) waiting for data.
# So, let cat have our STDOUT and move on.
cat B &
# Now, we should wait for the cat process to get going before we
# send the QUIT command. If we don't, the server will exit, the
# pipe will empty and cat will miss its chance to show the
# server's final words.
echo -n > B # also, 'sleep 1' will probably work.
echo "> quit"
echo "quit" > A
# close the file handle
exec 3>&-
rm A B
Notice that we are not simply dumping the SMTP commands on the server. We check
each response code to make sure things are OK. In this case, things will not be
OK and the script will bail.
I use Expect to interact with the shell for switch and router backups. A bash script calls the expect script with the correct variables.
for i in <list of machines> ; do expect_script.sh "$i" ; done
This will ssh to each box, run the backup commands, copy out the appropriate files, and then move on to the next box.
For simple use cases you can use a combination of a subshell, echo & sleep. For example, this manual session:
# in Terminal.app
telnet localhost 25
helo localhost
ehlo localhost
quit
can be scripted as:
(sleep 5; echo "helo localhost"; sleep 5; echo "ehlo localhost"; sleep 5; echo quit) |
telnet localhost 25
echo "cmdx\nsave\n...etc..." | prog
..?
