I am using a for loop to SSH into multiple hosts:
#!/usr/bin/bash
bandit=$(cat /root/Desktop/bandit.txt)
for host in {1..2}
do
    echo "inside the loop"
    ssh bandit$host@$bandit -p 2220
    echo "After the loop"
done
#ssh bandit0@bandit.labs.overthewire.org -p 2220
bandit.txt has the following content: "bandit.labs.overthewire.org"
I am getting the SSH prompts, but one at a time; for example, first I get the "bandit1" host login prompt, and only after closing the "bandit1" SSH session do I get the second SSH session, for "bandit2".
I would like to get a separate terminal for each SSH session.
But there is no such thing as a "terminal window" in bash (well, there is a tty, yours; but I mean, you can't just open a new window. Bash is not aware that it is running inside a specific program that emulates the behavior of a terminal in a GUI window).
So it can't be as easy as you would think.
Of course, you can choose a terminal emulator, and run it yourself.
For example
for host in {1..2}
do
    xterm -e ssh bandit$host@$bandit -p 2220 &
done
may be what you are looking for, if you have the xterm program installed.
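If the windows close as soon as each ssh session ends and you want them to stay open so you can read any remaining output, xterm's -hold option keeps the window around after the command exits; a minimal variation on the loop line above:
    xterm -hold -e ssh bandit$host@$bandit -p 2220 &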
Maybe with some additional error checking, something like this -
scr=( /dev/stdout
      $(ps -fu $USERNAME |
        awk '$4~/^pty/{lst[$4]} END{for (pty in lst) print pty}' ) )
for host in {1..2}; do echo ssh bandit$host@etc... >> /dev/${scr[$host]}; done
There are a lot of variations and kinks to work out, though. tty or pty? What if there's no other window open? etc. But with luck it will give you something to work from.
I am working on a script in Linux in which I need to SSH to another server.
I am using this syntax: ssh user@ip
When doing a manual SSH, after we type "ssh user@ip", a second prompt appears after a few seconds and asks which environment we are choosing, e.g. 1 to 4.
(4) - this is the number of the environment I will be choosing.
But when doing ssh user@ip 4, I receive an error (bash: command not found).
Here is an image of the prompt that appears when using ssh.
Without knowing what happens after you select the environment, it's probably something like:
echo 4 | ssh user@ip
Feed 4 to whatever is prompting for the environment, then exit.
If you need to stay connected, then something like:
echo 4 | ssh -tt user@ip "ksh -l"
ksh -l could be replaced by some other shell or command, like vi foo.txt (here, when the command exits, the connection closes).
The above is similar to this previous question:
Run ssh and immediately execute command
I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and B. My script is on machine A which will run some of my code on a remote machine, machine B.
The local and remote computers can be either Windows or Unix based system.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
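If the local script takes positional arguments, a variant of the same approach should work; this is a sketch, not part of the original answer (arg1 and arg2 are placeholders, available as $1 and $2 inside the script):
ssh root@MachineB 'bash -s -- arg1 arg2' < local_script.sh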
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (Note the quoted heredoc delimiter 'ENDSSH', which prevents local variable expansion.)
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that a heredoc just sends its stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works. ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case, echo is the command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values and sending everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values (thanks to local command-line evaluation), which defines the environment variables on the remote server; it then executes the bash -s command using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the user's ~/.ssh directory on hostB. That will allow passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
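A minimal sketch of setting that up, assuming OpenSSH on both ends (hostB and user are placeholders):
# on hostA: generate a key pair if you do not already have one
ssh-keygen -t ed25519
# append the public key to ~/.ssh/authorized_keys on hostB
ssh-copy-id user@hostB
# subsequent logins should no longer ask for the account password
ssh user@hostB "ls -la"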
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a Tcl extension known as Expect; it is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running something on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command in backquotes will cause errors. The command below will solve this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and is meant to be embedded inside your script, and you are running under the bash shell and bash is also available on the remote side, you may use declare to transfer local context to the remote. Define variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then, inside a here document read by bash -s, you can use declare -p to transfer the variable values and declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and the functions are properly transferred. You may just write the script locally; usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short script and is safe - it should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
    mkdir -p "$1"
}
work() {
    another_func "$somevar"
    touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2)   # transfer variable values
$(declare -f work another_func)  # transfer function definitions
work                             # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote linux machine using plink or ssh.
It will work if the script has multiple lines on Linux.
However, if you are trying to run a batch script located on a local Linux/Windows machine, your remote machine is Windows, and the script consists of multiple lines, then
plink root@MachineB -m local_script.bat
won't work.
Only the first line of the script will be executed. This is probably a
limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines using the "&&" separator (see https://stackoverflow.com/a/8055390/4752883) in your local_script.bat file:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by
@JasonR.Coombs (https://stackoverflow.com/a/2732991/4752883) with:
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch
script that encapsulates the plink command as well, as pointed out here by
@Martin (https://stackoverflow.com/a/32196999/4752883):
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This script SSHes into a target remote machine and runs some commands on the remote machine. It is an expect script, so do not forget to install expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
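For example, to type a single password that is reused for all hosts, the flags above can be combined like this (a sketch based on the listed options, with placeholder host names):
runoverssh -g -s localscript.sh user host1 host2 host3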
If it's just one script, the above solutions are fine.
I would set up Ansible to do the job. It works in the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
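As a rough sketch of what that could look like, Ansible's script module copies a local script to each host and runs it there; the inventory file and group name below are placeholders:
# hosts.ini lists the target hosts, one per line, under a group such as [workers]
ansible workers -i hosts.ini -m script -a "./localscript.sh"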
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does, this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands, and it supports multiple lines using the newline escape '\n'.
Example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Don't forget to add bash -s.
There is another approach: you can copy your script to the remote host with the scp command and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.
I am an R user. I always run programs on multiple computers on campus. For example, I need to run 10 different programs. I need to open PuTTY 10 times to log into 10 different computers, and submit each of the programs to one of the 10 computers (their OS is Linux). Is there a way to log in to 10 different computers and send them commands at the same time? I use the following commands to submit the programs:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script to ssh to each remote host to run the command. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
        host1)
            ssh $h nohup Rscript L_1_cc.R > L_1_sh.txt
            ;;
        host2)
            ssh $h nohup Rscript L_2_cc.R > L_2_sh.txt
            ;;
    esac
done
This is a very simplistic example. You can do much better than this (for example, you can put the ".R" and the ".txt" file names into a variable and use that rather than explicitly listing every option in the case).
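For instance, here is a sketch of that simplification, assuming the hosts are literally named host1 through host10 and host N should run L_N_cc.R (note that because the remote command is quoted, the output file is written on the remote host):
#!/bin/bash
for i in {1..10}
do
    ssh "host$i" "nohup Rscript L_${i}_cc.R > L_${i}_sh.txt" &
done
wait    # wait for all the ssh sessions to finish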
Based on your topic tags I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are using is *nix based so you can use the following script. If you are on Windows, consider Cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you to input a password each time you log into every target. You can then script the command execution on each target with something like the following:
#!/bin/bash
#kill script if we throw an error code during execution
set -e
#define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1)
#define associated user names for each host
users=( joe bob steve )
#counter to track iteration for correct user name
j=0
#iterate through each host and ssh into each with user#host combo
for i in ${hosts[*]}
do
    #modify ssh command string as necessary to get your script to execute properly
    #you could even add commands to transfer the file into which you seem to be dumping your results
    ssh ${users[$j]}@$i 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
#exit no error
exit 0
If you set up the public key authentication, you should just have to execute your script to make every remote host do their thing. You could even look into loading the users/hosts data from file to avoid having to hard code that information into the arrays.
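A sketch of that last idea, assuming a hypothetical hosts.txt file with one "user host" pair per line:
while read -r u h
do
    ssh "$u@$h" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
done < hosts.txt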
I am having some issues trying to connect through telnet to a mail server. The main problem that I am having is that I need to create a script that logs me in to the destination, and I can't find a way to echo the password.
What I tried:
telnet host -l username ; echo 'password'
And still it asks for my password. Is there any way to fix this, or am I doing something wrong?
First of all, you can use eval:
eval "{ echo user_name; sleep 1; echo pass; sleep 1; echo '?'; sleep 5; }" | telnet host_address
Make sure to replace user_name, pass, ? (which is the command you want to run) and host_address (where your telnet host is listening; for me it is a local IP).
It’s surprisingly easy to script a set of command and pipe them into the telnet application. All you need to do is something like this:
(echo commandname;echo anothercommand) | telnet host_address
The only problem is the nagging login that you have to get through… it doesn’t show up right away. So if you pipe in an “echo admin” and then “echo password,” it will happen too quickly and won’t be sent to the server. The solution? Use the sleep command!
Adding in a couple of sleep 3 commands, to wait three seconds, solves the problem. First we’ll echo the username and password, and then we’ll echo the reboot command, and each time we’ll wait three seconds between. The final command will reboot the server immediately:
(sleep 3;echo admin;sleep 3;echo mypassword;sleep 3;echo system reboot;sleep 3;) | telnet host_address
You can put this into a shell script and run it whenever you want. Or you can add it to your cron like this (on OS X or Linux):
crontab -e
Add this line somewhere:
1 7 * * * (sleep 3;echo admin;sleep 3;echo mypassword;sleep 3;echo system reboot;sleep 3;) | telnet host_address
This will reboot your router at 7:01 AM each morning.
AFAIK, you won't be able to automate telnet that way. But it is still possible - even if it is a very bad idea (I'll elaborate on that later).
First, why does your attempt fail:
you launched a telnet command reading from stdin (I suppose a terminal) and writing to stdout and stderr (I suppose also a terminal)
if your telnet is reasonably recent, it tries to protect your authentication and asks for your password from /dev/tty (for security reasons)
when that command has ended, you write the password on your own terminal
What you should do instead:
launch telnet with automatic authentication disabled (on mine it is telnet -X SRA)
feed its input with the commands you want to pass
wait some delay before sending input, at least for the login and password, because if you don't, telnet clears its input before reading and discards what you sent
Here is an example that allowed me to telnet to my own machine :
sh << EOF | telnet -X SRA localhost
sleep 2
echo my_user_name
sleep 1
echo my_password
# sleep 1 # looks like it can be removed
echo echo foo and bar
sleep 1
EOF
It correctly logs me into my box, executes echo foo and bar (essential command :-) ) and disconnects
Now, why you should never do that:
you write a password in clear text in a script file, which is poor security practice
you use telnet to do batch processing when it is not intended to be used that way: the script may not be portable to another telnet version
If you really want to pass command in a batch way to a remote server, you should instead try to use ssh which :
has options to process authentication securely (no password in script, nothing in clear text)
is intended to be used in batch mode as well as interactively
If you cannot use ssh (some sysadmins do not like to allow uncontrolled incoming ssh connections), you could also try to use rsh. It is older and far less secure, but at least it was designed for batch usage.
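For completeness, a minimal sketch of the ssh route described above, using key-based authentication plus BatchMode so ssh fails instead of falling back to a password prompt (host name and command are placeholders):
ssh -o BatchMode=yes my_user@remote_host 'echo foo and bar'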
Thanks to Harvix's answer, I learned that there is also an expect alternative native to the shell, called sexpect. Get it from here. Then create this script (I call it telnetpass):
#!/bin/bash
# This script is used for automatically pass authentication by username and password in telnet prompt
# Its goal is similar as sshpass, but for telnet, so I call it telnetpass
. ~/.private/cisco_pw # should contain PASSWORD variable
export SEXPECT_SOCKFILE=/tmp/sexpect-telnetpass-$$.sock
sexpect spawn telnet $1
sexpect expect -cstring 'Username:'
sexpect send -enter $USER
sexpect expect -cstring 'Password:'
sexpect send -enter $PASSWORD
sexpect interact
Then you can run telnetpass Host125 and get past the authentication automatically:
Trying 198.51.100.78 ...
Connected to Host125.
Escape character is '^]'.
User Access Verification
Username: ashark
Password:
host-125>
I like this solution more than using sleep commands as suggested in other answers, because sleep solutions sometimes fail.
Have you tried using the expect command? You will have to create a script where you identify the 'expected' response from the server, e.g. 'Password:', and then supply the password in the script. The following will explain: https://en.wikipedia.org/wiki/Expect - A good example is also shown here: http://en.kioskea.net/faq/4736-shell-script-for-telnet-and-run-commands
Try eval:
eval "{ echo;
sleep 3;
echo $user;
sleep 1;
echo $pass;
sleep 1;
echo '?';
sleep 1; }" | telnet your_host
In this example, my remote command is '?' (help).
The sleeps (maybe not all of them, nor these exact times; trial and error...) are needed to avoid telnet missing some of the inputs.
The user and password are passed as variables ($user and $pass). Take into account security recommendations for storing the password if you are scripting.