I have a list of servers in a file I'll call "boxes."
I'm running the following:
bastion:~ # while read i j; do ssh -n $i 'echo $(hostname)'; done < boxes
When I run this, I should be getting the host name of every box I'm supposedly SSHing into, right? But what I'm getting is my own hostname, "bastion", over and over, as if the command is not SSHing into any box at all.
Any idea why that might be or what I'm doing wrong here?
Thanks.
EDIT: I couldn't get SSH to work for this task. In the end I swapped the ssh for ansible -m shell.
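For reference, a sketch of that ansible equivalent, assuming the plain host list in "boxes" is usable as an ad-hoc inventory file:
# Treat the "boxes" host list as an inventory and run hostname
# on every machine in it
ansible all -i boxes -m shell -a 'hostname'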
Side note: hostname on its own is enough; the echo $(hostname) wrapper is redundant.
My favorite solution uses GNU Parallel (much faster): sudo apt-get install parallel
parallel --nonall --slf boxes hostname
You can add --tag and much more; see https://www.gnu.org/software/parallel/parallel_tutorial.html
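For example, --tag prefixes each output line with the host it came from:
# Print each host's name, prefixed with the sshlogin it came from
parallel --nonall --slf boxes --tag hostname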
I have an Arch server running in a VMware VM. I connect to it through a firewall that forwards ssh connections from port X to port 22 on the server. Yesterday, I started receiving the error "Bash: Fork: Resource Temporarily Unavailable".
I can log in as root and manage things without problem, but when I ssh in as the user I normally use, the session now spawns hundreds of ssh-agent /bin/bash sessions. This, in turn, uses up all the threads and file descriptors (from what I can tell) on the system and makes it unusable.
The little bit of info I've been able to find so far tells me that I must have some sort of loop, but this hadn't happened until yesterday, possibly when I ran updates. At this point, I am open to suggestions.
One of your shell initialization files is probably spawning a shell which, when it reads the initialization files, spawns another shell, and so on.
You mentioned ssh-agent /bin/bash. Putting this in .bashrc will definitely cause problems: it instructs ssh-agent to spawn bash, and every bash spawned this way reads .bashrc again...
Instead, use something like
if [[ -z "$SSH_AUTH_SOCK" ]]; then
    eval $(ssh-agent)
fi
in .bashrc (or .xinitrc or .xsession for systems with graphical logins).
Or possibly (untested):
if [[ -z "$SSH_AUTH_SOCK" ]]; then
    ssh-agent /bin/bash
fi
in .bash_profile.
In my case (on Windows) it was because I was not exiting when done with a shell, so the agents were never disposed of.
When done, use Ctrl+D or type exit to terminate the ssh agent.
I am an R user. I regularly run programs on multiple computers on campus. For example, if I need to run 10 different programs, I open PuTTY 10 times to log into 10 different computers and submit one program to each (their OS is Linux). Is there a way to log into the 10 computers and send each of them a command at the same time? I submit a program with a command like the following:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script that sshes to each remote host and runs the command. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
        host1)
            # Quote the remote command so the redirection happens on the
            # remote host rather than locally, and background it so the
            # loop doesn't wait for the Rscript to finish
            ssh "$h" 'nohup Rscript L_1_cc.R > L_1_sh.txt 2>&1 &'
            ;;
        host2)
            ssh "$h" 'nohup Rscript L_2_cc.R > L_2_sh.txt 2>&1 &'
            ;;
    esac
done
This is a very simplistic example. You can do much better than this (for example, you can put the ".R" and the ".txt" file names into a variable and use that rather than explicitly listing every option in the case), as in the sketch below.
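A minimal sketch of that refinement, deriving both file names from a counter (the host names are the same placeholders as above):
#!/bin/bash
# Number the hosts and build the .R and .txt names from the counter
# instead of spelling out one case branch per host
n=1
for h in host1 host2 host3 host4 host5 host6 host7 host8 host9 host10
do
    ssh "$h" "nohup Rscript L_${n}_cc.R > L_${n}_sh.txt 2>&1 &"
    n=$((n + 1))
done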
Based on your topic tags, I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are working from is *nix-based so you can use the following script. If you are on Windows, consider Cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you to input a password each time you log into every target. You can then script the command execution on each target with something like the following:
#!/bin/bash
# Kill the script if we throw an error code during execution
set -e
# Define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1 )
# Define the associated user names for each host
users=( joe bob steve )
# Counter to track the iteration for the correct user name
j=0
# Iterate through each host and ssh into each with the user@host combo
for i in "${hosts[@]}"
do
    # Modify the ssh command string as necessary to get your script to execute properly
    # You could even add commands to transfer the file into which you seem to be dumping your results
    ssh "${users[$j]}@$i" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
# Exit with no error
exit 0
If you set up public key authentication, you should just have to execute the script to make every remote host do its thing. You could even look into loading the users/hosts data from a file to avoid hard-coding that information in the arrays; a sketch of that follows.
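For example, a minimal sketch of the file-driven variant, assuming a file servers.txt with one "user host" pair per line:
#!/bin/bash
# Read user/host pairs from a file instead of hard-coding arrays.
# ssh -n keeps ssh from consuming the rest of the list on stdin.
while read -r user host
do
    ssh -n "${user}@${host}" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
done < servers.txt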
I have two questions:
There are multiple remote linux machines, and I need to write a shell script which will execute the same set of commands in each machine. (Including some sudo operations). How can this be done using shell scripting?
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
The remote machines are VMs created on the fly and I just have their IPs. So I can't place a script file beforehand on those machines and execute it from my machine.
There are multiple remote linux machines, and I need to write a shell script which will execute the same set of commands in each machine. (Including some sudo operations). How can this be done using shell scripting?
You can do this with ssh, for example:
#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
    ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
You can add the StrictHostKeyChecking=no option to ssh:
ssh -o StrictHostKeyChecking=no -l username hostname "pwd; ls"
This will disable the host key check and automatically add the host key to the list of known hosts. If you do not want to have the host added to the known hosts file, add the option -o UserKnownHostsFile=/dev/null.
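Combining both options, the full call looks like this:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l username hostname "pwd; ls"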
Note that this disables certain security checks, for example protection against man-in-the-middle attacks. It should therefore not be used in a security-sensitive environment.
Install sshpass with apt-get install sshpass, then edit the script to put your Linux machines' IPs, usernames, and passwords in the corresponding order. After that, run the script. That's it! This example script installs VLC on all the systems.
#!/bin/bash
SCRIPT="cd Desktop; pwd; echo -e 'PASSWORD' | sudo -S apt-get install vlc"
HOSTS=("192.168.1.121" "192.168.1.122" "192.168.1.123")
USERNAMES=("username1" "username2" "username3")
PASSWORDS=("password1" "password2" "password3")
for i in ${!HOSTS[*]} ; do
    echo ${HOSTS[i]}
    SCR=${SCRIPT/PASSWORD/${PASSWORDS[i]}}
    sshpass -p ${PASSWORDS[i]} ssh -l ${USERNAMES[i]} ${HOSTS[i]} "${SCR}"
done
This works for me.
Syntax: ssh -i pemfile.pem user_name@ip_address 'command_1; command_2; command_3'
#! /bin/bash
echo "########### connecting to server and run commands in sequence ###########"
ssh -i ~/.ssh/ec2_instance.pem ubuntu@ip_address 'touch a.txt; touch b.txt; sudo systemctl status tomcat.service'
There are a number of ways to handle this.
My favorite way is to install http://pamsshagentauth.sourceforge.net/ on the remote systems along with your own public key. (Figure out a way to get these installed on the VM; somehow you got an entire Unix system installed, so what's a couple more files?)
With your ssh agent forwarded, you can now log in to every system without a password.
And even better, that pam module will authenticate for sudo with your ssh key pair so you can run with root (or any other user's) rights as needed.
You don't need to worry about the host key interaction: if the input is not a terminal, ssh will simply limit your ability to forward agents and authenticate with passwords.
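If you prefer configuration over command-line flags, here is a sketch of a ~/.ssh/config entry for the example host used below (ForwardAgent and BatchMode are standard ssh_config options):
# ~/.ssh/config: forward the agent to this host and never prompt
# interactively for passwords or passphrases
Host target.mycorp.net
    ForwardAgent yes
    BatchMode yes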
You should also look into packages like Capistrano. Definitely look around that site; it has an introduction to remote scripting.
Individual script lines might look something like this:
ssh remote-system-name command arguments ... # so, for example,
ssh target.mycorp.net sudo puppet apply
The accepted answer sshes into the machines sequentially. If you want to ssh into multiple machines and run long-running commands (like scp) on them concurrently, run each ssh command as a background process.
#!/bin/bash
username="user"
servers=("srv-001" "srv-002" "srv-003" "srv-004");
script="pwd;"
for s in "${servers[@]}"; do
    echo "sshing ${username}@${s} to run ${script}"
    (ssh ${username}@${s} ${script})& # Run in background
done
done
wait # If removed, you can run some other script here
If you are able to write Perl code, you should consider using Net::OpenSSH::Parallel.
You can describe the actions that have to be run on every host in a declarative manner, and the module takes care of all the scary details. Running commands through sudo is also supported.
For this kind of task I repeatedly use Ansible, which lets you run the same bash scripts coherently across several containers or VMs. Ansible (more precisely, Red Hat) now also offers a web interface, AWX, which is the open-source edition of their commercial Tower.
Ansible: https://www.ansible.com/
AWX: https://github.com/ansible/awx
Ansible Tower: a commercial product; you will probably first explore the free open-source AWX rather than the 15-day free trial of Tower.
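As a minimal sketch (the inventory file name hosts.ini and the script name are assumptions), an ad-hoc Ansible run looks like this:
# Run one command on every host in the inventory
ansible all -i hosts.ini -m shell -a 'uptime'
# Or copy a local bash script to each host and execute it there
ansible all -i hosts.ini -m script -a './setup.sh'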
There are multiple ways to execute commands or scripts on multiple remote Linux machines.
One of the simplest is pssh (parallel ssh program).
pssh is a program for executing ssh in parallel on a number of hosts. It provides features such as sending input to all of the processes, passing a password to ssh, saving output to files, and timing out.
Example & Usage:
Connect to host1 and host2, and print "hello, world" from each:
pssh -i -H "host1 host2" echo "hello, world"
Run commands via a script on multiple servers:
pssh -h hosts.txt -P -I<./commands.sh
Run a command without checking or saving host keys:
pssh -h hostname_ip.txt -x '-q -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes' -i 'uptime; hostname -f'
If the file hosts.txt has a large number of entries, say 100, then the parallelism option may also be set to 100 to ensure that the commands are run concurrently:
pssh -i -h hosts.txt -p 100 -t 0 sleep 10000
Options:
-I: Read input and send it to each ssh process.
-P: Tell pssh to display output as it arrives.
-h: Read the hosts file.
-H: [user@]host[:port] for a single host.
-i: Display standard output and standard error as each host completes.
-x args: Pass extra SSH command-line arguments.
-o option: Can be used to give options in the format used in the ssh configuration files (/etc/ssh/ssh_config, ~/.ssh/config).
-p parallelism: Use the given number as the maximum number of concurrent connections.
-q: Quiet mode; causes most warning and diagnostic messages to be suppressed.
-t: Make connections time out after the given number of seconds. 0 means pssh will not time out any connection.
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
Disable strict host key checking to handle the RSA fingerprint prompt:
-o StrictHostKeyChecking=no
Source: man pssh
This worked for me. I made a function. Put this in your shell script:
sshcmd(){
    ssh $1@$2 $3
}
sshcmd USER HOST COMMAND
If you have multiple machines on which you want to run the same command, repeat that call separated by a semicolon (or use a loop, as sketched below). For example, for two machines:
sshcmd USER HOST COMMAND ; sshcmd USER HOST COMMAND
Replace USER with the user of the computer. Replace HOST with the name of the computer. Replace COMMAND with the command you want to do on the computer.
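Equivalently, a loop over placeholder host names:
# Same function, driven by a loop (host names are placeholders)
for h in host1 host2
do
    sshcmd USER "$h" COMMAND
done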
Hope this helps!
You can follow this approach:
Connect to the remote machine using an Expect script. If your machine doesn't have expect installed, you can download it (writing an Expect script is very easy; google for help on this).
Put all the actions that need to be performed on the remote server in a shell script.
Invoke the remote shell script from the Expect script once the login is successful, as in the sketch after this list.
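A minimal sketch of that flow, wrapped in a bash heredoc; user, host, secret, and the remote script path are all placeholders, not real values:
#!/bin/bash
# Use expect to answer ssh's password prompt, then run the remote script
expect <<'END'
spawn ssh user@host bash /path/to/remote_script.sh
expect "password:"
send "secret\r"
expect eof
END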
I use ssh-agent with password-protected keys on Linux. Every time I log into a certain machine, I do this:
eval `ssh-agent` && ssh-add
This works well enough, but every time I log in and do this, I create another ssh-agent. Once in a while, I will do a killall ssh-agent to reap them. Is there a simple way to reuse the same ssh-agent process across different sessions?
Have a look at Keychain. It was written by people in a similar situation to yourself.
How much control do you have over this machine? One answer would be to run ssh-agent as a daemon process. Other options are explained on this web page, basically testing to see if the agent is around and then running it if it's not.
To reproduce one of the ideas here:
SSH_ENV="$HOME/.ssh/environment"

function start_agent {
    echo "Initialising new SSH agent..."
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    echo succeeded
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add;
}

# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    # ps ${SSH_AGENT_PID} doesn't work under Cygwin
    ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
        start_agent;
    }
else
    start_agent;
fi
You can do:
ssh-agent $SHELL
This will cause ssh-agent to exit when the shell exits. They still won't be shared across sessions, but at least they will go away when you do.
Depending on which shell you use, you can set up different profiles for login shells and regular new shells. In general you want to start ssh-agent for login shells, but not for every subshell. In bash these files would be .bashrc and .bash_login, for example.
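A sketch of what that could look like in a login-shell profile such as ~/.bash_login, guarded so an already-running agent is reused:
# Start an agent only if this shell doesn't already have one
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)"
    ssh-add
fi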
Most desktop Linuxes these days run ssh-agent for you. You just add your key with ssh-add and then forward it to remote ssh sessions by running:
ssh -A