Run Bash Script followed by keyword or Variable - linux

I tried to search for an answer online but found it hard to phrase the search criteria to find what I'm looking for. For starters I'm a total noob to coding/scripting.
So far I have worked out how to use the read builtin inside a script to create variables from user input, which is great. My question is: can you run a script followed by a keyword that signifies a hostname (target device)? That would look something like this:
sample.sh hostname
Currently I only know how to read the hostname into a variable after the script has started, but that feels a little slower. What I'd like is a generic script that checks interface status on a networking switch; it would be great to set it up so I could run it followed by a hostname to quickly check that single device. I think I am missing some fundamental understanding.
I hope I described that clearly; thanks for any help in advance. My sample bash script is below, along with the expect script it hands off to:
#!/bin/bash
echo "What device do you want to check TRUNK links?"
read -e host
echo "Username:"
read -e user
echo "Password:"
read -s -e password
./show-trunk.exp "$host" "$user" "$password"
#!/usr/bin/expect -f
set host [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]
#spawn echo "Running Script..."
log_user 0
spawn ssh $user@$host
expect "*assword:"
send "$password\r"
expect "#"
log_user 1
send "show int desc | in TRUNK\r"
expect "#"
send "exit\r"

Bash has special variables, the positional parameters, that hold the arguments passed to the script. You reference them like this:
#!/usr/bin/env bash
echo $1 $2 $3
Then to run the script on the command line:
~> ./test.sh hello bash user
hello bash user
If you need to test for the correct number of arguments, you can do a simple if statement like this:
#!/usr/bin/env bash
if [[ "$#" -eq 3 ]]; then
echo $1 $2 $3
else
echo "Not enough arguments"
fi
Then if you call it on the command line:
~> ./test.sh hello bash user
hello bash user
~> ./test.sh oops
Not enough arguments
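Applied back to the original trunk-check script, here is a minimal sketch that takes the hostname from the first argument when one is given and falls back to prompting when it isn't (show-trunk.exp is the asker's expect script from above):
#!/bin/bash
# use the first argument as the host if supplied, otherwise prompt for it
host=$1
if [[ -z "$host" ]]; then
    echo "What device do you want to check TRUNK links?"
    read -e host
fi
echo "Username:"
read -e user
echo "Password:"
read -s -e password
./show-trunk.exp "$host" "$user" "$password"
Run it as ./sample.sh hostname to skip the hostname prompt entirely.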


How do I properly use SSH heredoc?

This question is somewhat related to the question I asked here, but it has not been adequately answered. What interests me here is the following:
When I run the command type -t test on the remote computer, I get the answer 'function', because test is an existing function defined in the .bashrc file on that remote computer.
However, when I run this SSH command on the local computer,
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t test)
echo $VAR
EOSSH
)"
echo $s
I don't get anything printed. The first question would be how do I make this work?
The second question builds on the previous one. That is, my ultimate goal is to define on a local computer which function I want to check on a remote computer and come up with an adequate answer, ie.:
a="test"
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t $a)
echo $VAR
EOSSH
)"
echo $s
So, I would like the variable s to be equal to 'function'. How to do it?
how do I make this work?
Either load .bashrc (. .bashrc) or start an interactive session (bash -i).
Because your session is non-interactive: if you want .bashrc loaded and it has no guard against non-interactive use, just source it. If it does have such a guard, consider moving your function somewhere else, into a file you can source. Otherwise, be prepared for an interactive session to print /etc/motd, /etc/issue and other interactive noise.
Remove -T; you do not need a tty for non-interactive work.
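A minimal sketch of the first option, assuming the function lives in the remote ~/.bashrc and that file has no guard against non-interactive shells:
s="$(
ssh "$HOST" <<'EOSSH'
. ~/.bashrc
type -t test
EOSSH
)"
echo "$s" # prints "function" if test is defined in the remote .bashrc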
I would like the variable s to be equal to 'function'. How to do it?
I recommend using declare to transfer all the work and context that you need; it is flexible, works generically, preserves stdin, and doesn't require you to deal with the intricacies of escaping inside a here document. Specifically, request a bash shell on the remote side and use printf "%q" to properly escape all the data.
functions_to_check=(a b c)
fn_exists() { [[ "$(LC_ALL=C type -t -- "$1" 2>/dev/null)" = function ]]; }
work() {
for f in "${functions_to_check[@]}"; do
if fn_exists "$f"; then
echo "Great - function $f exists!"
else
echo "Och nuu - no function $f!"
fi
done
}
ssh "$host" "$(printf "%q " bash -c "
$(declare -p functions_to_check) # transfer variables
$(declare -f fn_exists work) # transfer functions
work # run the work to do
")"

Positional parameter not readable inside the variable - bash script

I have a problem with the small script I am working on, can you please explain why is this not working:
#!/bin/bash
var1=$( linux command to list ldap users | grep "user: $1")
echo $var1
So, when I run my script ( ./mycript.sh $michael ), it should use that value in place of $1 and print the result via echo $var1? In my case that is not working.
Can you please explain how should I configure positional parameter inside the variable?
I tried this solution, but it did not help:
#!/bin/bash
var1=$( linux command to list ldap users | grep user: $1)
echo $var1
If you invoke your script as ./mycript.sh $michael and the variable michael is not set in the shell, then you are calling your script with no arguments. Perhaps you meant ./myscript.sh michael to pass the literal string michael as the first argument. A good way to protect against this sort of error in your script is to write:
#!/bin/bash
var1=$( linux command to list ldap users | grep "user: ${1:?}")
echo "$var1"
The ${1:?} will expand to $1 if that parameter is non-empty. If it is empty or unset, the shell prints an error message and the script exits.
If you'd like the script to terminate if no values are found by grep, you might want:
var1=$( linux command to list ldap users | grep "user: ${1:?}") || exit
But it's probably easier/better to actually validate the arguments and print an error message. (Personally, I find the error messages from ${1:?} constructs a bit less than ideal.) Something like:
#!/bin/bash
if test $# -lt 1; then echo 'Missing arguments' >&2; exit 1; fi
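Putting the pieces together, a sketch of the whole script with explicit validation (the ldap listing command remains the placeholder from the question):
#!/bin/bash
if test $# -lt 1; then
    echo "usage: $0 username" >&2
    exit 1
fi
# placeholder command from the question; substitute the real one
var1=$( linux command to list ldap users | grep "user: $1" ) || exit
echo "$var1"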

Iterate through a list using 'while read' using bash

I have a list of IP addresses, and my end goal is to ssh into each one and reset them one at a time. I was asked to use Linux/Bash, with which I am not very familiar. Right now my code takes the first IP from the list and connects to it, but it never moves on past that point. I believe the issue is somewhere between the while read oneip3 and the do block. Any help is greatly appreciated.
The way I run this script is as follows: (I have a list of IP addresses in a separate text file):
./runscript.txt ip_list.txt
while read oneip3
do
(sleep 5
echo "yes\r"
sleep 3
echo -e "password\r"
sleep 3
echo -e "reset\r"
sleep 3
echo -e "yes\r"
sleep 20
echo -e "\r"
) | ssh -t -t -oHostKeyAlgorithms=+ssh-dss "admin@$oneip3"
done < $1
You didn't give ssh a command argument, so it opens an interactive shell on the remote machine.
That is a good reason to be stuck on the first machine (though there may be other reasons...).
Try this to debug
... | ssh -t -t -oHostKeyAlgorithms=+ssh-dss "admin#$oneip3" pwd
The other remarks in the comments about StrictHostKeyChecking seem good too (if you are really concerned about security, you can deploy all the needed keys by hand first).
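For example, a debugging sketch of the whole loop: giving ssh an explicit remote command (pwd here, purely as a stand-in) lets each session terminate on its own so the loop can advance:
while read -r oneip3
do
    # </dev/null keeps ssh from swallowing the remaining lines of the IP list
    ssh -t -t -oHostKeyAlgorithms=+ssh-dss "admin@$oneip3" pwd < /dev/null
done < "$1"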

Bash script runs one command before previous. I want them one after the other

So part of my script is as follows:
ssh user@$remoteServer "
cd ~/a/b/c/;
echo -e 'blah blah'
sleep 1 # Added this just to make sure it waits.
foo=`grep something xyz.log |sed 's/something//g' |sed 's/something-else//g'`
echo $foo > ~/xyz.list
exit "
In my output I see:
grep: xyz.log: No such file or directory
blah blah
Whereas when I ssh to the server, xyz.log does exist within ~/a/b/c/
Why is the grep statement getting executed before the echo statement?
Can someone please help?
The problem here is that your command in backticks is being run locally, not on the remote end of the SSH connection. Thus, it runs before you've even connected to the remote system at all! (This is true for all expansions that run in double-quotes, so the $foo in echo $foo as well).
Use a quoted heredoc to protect your code against local evaluation:
ssh user@$remoteServer bash -s <<'EOF'
cd ~/a/b/c/;
echo -e 'blah blah'
sleep 1 # Added this just to make sure it waits.
foo=`grep something xyz.log |sed 's/something//g' |sed 's/something-else//g'`
echo $foo > ~/xyz.list
exit
EOF
If you want to pass through a variable from the local side, the easy way is with positional parameters:
printf -v varsStr '%q ' "$varOne" "$varTwo"
ssh "user#$remoteServer" "bash -s $varsStr" <<'EOF'
varOne=$1; varTwo=$2 # set as remote variables
echo "Remote value of varOne is $varOne"
echo "Remote value of varTwo is $varTwo"
EOF
[command server] ------> [remote server]
The better way is to create a shell script on the remote server and invoke it from the command server, such as:
ssh ${remoteserver} "/bin/bash /foo/foo.sh"
It avoids many problems; the aim is to keep things simple rather than complex.
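For instance, a sketch of what /foo/foo.sh on the remote server could contain, reusing the paths from the question above:
#!/bin/bash
# runs entirely on the remote side, so the command substitution and
# $foo both expand there rather than on the command server
cd ~/a/b/c/ || exit 1
foo=$(grep something xyz.log | sed -e 's/something//g' -e 's/something-else//g')
echo "$foo" > ~/xyz.list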

Is it possible to make a bash shell script interact with another command line program?

I am using an interactive command-line program in a Linux terminal running the bash shell. I have a definite sequence of commands that I input to the program, which writes its output to standard output. One of these commands is a 'save' command that writes the output of the previously run command to a file on disk.
A typical cycle is:
$prog
$$cmdx
$$<some output>
$$save <filename>
$$cmdy
$$<again, some output>
$$save <filename>
$$q
$<back to bash shell>
$ is the bash prompt
$$ is the program's prompt
q is the quit command for prog
prog is such that it appends the output of the previous command to filename
How can I automate this process? I would like to write a shell script that can start this program and cycle through the steps, feeding it the commands one by one and then quitting. I hope the save commands will still work correctly.
If your command doesn't care how fast you give it input, and you don't really need to interact with it, then you can use a heredoc.
Example:
#!/bin/bash
prog <<EOD
cmdx
save filex
cmdy
save filey
q
EOD
If you need branching based on the output of the program, or if your program is at all sensitive to the timing of your commands, then Expect is what you want.
I recommend you use Expect. This tool is designed to automate interactive shell applications.
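As a minimal sketch of that approach for the cycle above (assuming prog's prompt really is the literal '$$' shown in the transcript, and with placeholder filenames):
#!/usr/bin/expect -f
spawn prog
expect -exact {$$}
send "cmdx\r"
expect -exact {$$}
send "save filex\r"
expect -exact {$$}
send "cmdy\r"
expect -exact {$$}
send "save filey\r"
expect -exact {$$}
send "q\r"
expect eof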
Where there's a need, there's a way! I think it's a good bash lesson to see how process management and IPC work. The best solution is, of course, Expect. But the real reason is that pipes can be tricky, and many commands are designed to wait for data, meaning a process can block forever for reasons that may be difficult to predict. Learning how and why reminds us of what is going on under the hood.
When two processes engage in a conversation, the danger is that one or both will
try to read data that will never arrive. The rules of engagement have to be
crystal clear. Things like CRLF and character encoding can kill the party.
Luckily, two close partners like a bash script and its child process are relatively easy to keep in line. The easiest thing to miss is that bash launches a child process for just about everything it does. If you can make it work with bash, you thoroughly know what you're doing.
The point is that we want to talk to another process. Here's a server:
#!/bin/bash
# a really bad SMTP server
# a hint at courtesy to the client
shopt -s nocasematch
echo "220 $HOSTNAME SMTP [$$]"
while true
do
read
[[ "$REPLY" =~ ^helo\ [^\ ] ]] && break
[[ "$REPLY" =~ ^quit ]] && echo "Later" && exit
echo 503 5.5.1 Nice guys say hello.
done
NAME=`echo "$REPLY" | sed -r -e 's/^helo //i'`
echo 250 Hello there, $NAME
while read
do
[[ "$REPLY" =~ ^mail\ from: ]] && { echo 250 2.1.0 Good guess...; continue; }
[[ "$REPLY" =~ ^rcpt\ to: ]] && { echo 250 2.1.0 Keep trying...; continue; }
[[ "$REPLY" =~ ^quit ]] && { echo Later, $NAME; exit; }
echo 502 5.5.2 Please just QUIT
done
echo Pipe closed, exiting
Now, the script that hopefully does the magic.
#!/bin/bash
# Talk to a subprocess using named pipes
rm -fr A B # don't use old pipes
mkfifo A B
# server will listen to A and send to B
./smtp.sh < A > B &
# If we write to A, the pipe will be closed.
# That doesn't happen when writing to a file handle.
exec 3>A
read < B
echo "$REPLY"
# send an email, so long as response codes look good
while read L
do
echo "> $L"
echo $L > A
read < B
echo $REPLY
[[ "$REPLY" =~ ^2 ]] || break
done <<EOF
HELO me
MAIL FROM: me
RCPT TO: you
DATA
Subject: Nothing
Message
.
EOF
# This is tricky, and the reason sane people use Expect. If we
# send QUIT and then wait on B (ie. cat B) we may have trouble.
# If the server exits, the "Later" response in the pipe might
# disappear, leaving the cat command (and us) waiting for data.
# So, let cat have our STDOUT and move on.
cat B &
# Now, we should wait for the cat process to get going before we
# send the QUIT command. If we don't, the server will exit, the
# pipe will empty and cat will miss its chance to show the
# server's final words.
echo -n > B # also, 'sleep 1' will probably work.
echo "> quit"
echo "quit" > A
# close the file handle
exec 3>&-
rm A B
Notice that we are not simply dumping the SMTP commands on the server. We check
each response code to make sure things are OK. In this case, things will not be
OK and the script will bail.
I use Expect to interact with the shell for switch and router backups. A bash script calls the expect script with the correct variables.
for i in <list of machines> ; do expect_script.sh "$i" ; done
This will ssh to each box, run the backup commands, copy out the appropriate files, and then move on to the next box.
For simple use cases you may use a combination of a subshell, echo and sleep. First, the session typed by hand:
# typed manually in Terminal.app
telnet localhost 25
helo localhost
ehlo localhost
quit
And the same session automated:
(sleep 5; echo "helo localhost"; sleep 5; echo "ehlo localhost"; sleep 5; echo quit ) |
telnet localhost 25
echo "cmdx\nsave\n...etc..." | prog
..?
