I have to run some user-defined commands on remote servers, so I do the following. It works for many commands such as crontab -l, ls, date +%s, etc., but it doesn't work for ip addr.
When I actually ssh INTO those servers, ip addr works fine. But when I execute it through ssh as shown below, it doesn't.
This is how I execute it.
$ sshpass -p myPassword ssh -q root@127.0.0.1 'ip addr' > $PWD/tmp
$ cat $PWD/tmp
Again, this works for any commands I've tried so far except ip addr.
For ip addr it gives the following output
bash: ip: command not found
So I was wondering why this happens and whether there's anything wrong with what I'm doing.
Also, please don't suggest using rsync or any other non-default Linux command, since the environment I work in does not have them, nor do I have permission to install anything.
Thank you in advance
This happens because a non-interactive ssh session does not source your login profile, which, among other things, sets your PATH variable.
The default path does not contain /sbin, which is the usual location of the ip command.
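A quick workaround, assuming ip lives in /sbin or /usr/sbin on that server (check with which ip in an interactive session), is to call it by its full path or to extend PATH just for that one command:

$ sshpass -p myPassword ssh -q root@127.0.0.1 '/sbin/ip addr' > $PWD/tmp
$ sshpass -p myPassword ssh -q root@127.0.0.1 'PATH=$PATH:/sbin:/usr/sbin; ip addr' > $PWD/tmp

The single quotes matter: they keep $PATH from being expanded on your local machine, so it is resolved by the remote shell instead.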
Related
I wish to run a script on the remote system and then stay there.
I am running the following:
ssh user@remote logs.sh
This does run the script, but afterwards I am back on my host system. I need to stay on the remote one. I tried:
ssh user@remote logs.sh;bash -l
Somehow it solves the problem, but it still doesn't work exactly like a fresh login with:
ssh user@remote
It would also be better if I could include something in my script that opens the bash shell in the same directory where the script was running. Please suggest.
Try this:
ssh -t user@remote 'logs.sh; bash -l'
The quotes are needed to pass both commands to ssh. The -t option forces a pseudo-tty allocation.
Discussion
Consider:
ssh user@remote logs.sh;bash -l
When the shell parses this line, it splits it into two commands. The first is:
ssh user@remote logs.sh
This runs logs.sh on the remote machine. The second command is:
bash -l
This opens a login shell on the local machine.
The quotes were added above to prevent the shell from splitting up the commands this way.
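If you also want the interactive shell to start in the directory the script runs from (as asked in the question), one option is to cd there first. Here /path/to/logs is only a placeholder for wherever logs.sh lives on the remote machine, and this assumes your remote login files don't change the working directory themselves:

ssh -t user@remote 'cd /path/to/logs && ./logs.sh; bash -l'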
I am new to Linux and shell scripting. I want to connect to localhost and interact with it.
#! /bin/bash
(exec /opt/scripts/run_server.sh)
When I execute this bash script, it starts listening on a port.
Listening on port xxxxx
Now I want to issue this command: telnet localhost xxxxx
I tried something like this:
#! /bin/bash
(exec /opt/opencog/scripts/run_server.sh)&&
telnet localhost xxxxx
It is still listening on the port, but I think the second command is not running. I expect another window showing that it is connected, like this:
vishnu#xps15:~$ telnet localhost xxxx
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
server>
The reason I am executing these as a script is that I need to automatically carry out some processing on the server by issuing certain commands such as scm, parse, etc.:
vishnu#xps15:~$ telnet localhost xxxx
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
server>scm
Entering scheme shell; use ^D or a single . on a line by itself to exit.
guile> (parse "i eat apple")
I have lots of text coming in; I can't manually issue the parse command for each and every sentence, so I want to automate it. So I need to write a script that connects to the server and interacts with it.
Any guidelines? Finally, how do I interact with / send commands to this guile shell?
One way to log in to a Linux server as the same or a different user and run commands or a .sh script (very useful for post-commit hooks or cron jobs) is to use a program called sshpass. For example, a cron job command or svn post-commit hook would look like this:
/usr/bin/sshpass -p 'password' /usr/bin/ssh
-o StrictHostKeyChecking=no -q user@localhost 'any command'
Just replace password with your password and user with your user, and put in the command that you need to run as that particular user.
To install sshpass on Ubuntu, just type:
apt-get install sshpass
Or on CentOS:
yum install sshpass
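For example, a crontab entry that uses this to run a hypothetical backup script on another account every night at 2:00 might look like the following; the script path and credentials are placeholders:

0 2 * * * /usr/bin/sshpass -p 'password' /usr/bin/ssh -o StrictHostKeyChecking=no -q user@localhost '/home/user/backup.sh'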
I solved this with the netcat (nc) command:
$ echo -e "command1\ncommand2\n" | nc localhost xxxxx
I could connect to localhost manually using telnet localhost xxxx, and with the line above I can pass the same commands from the shell to localhost. (Note that echo needs -e here so that \n is interpreted as a newline.)
If you need to use telnet, this solution may help you. Otherwise, use ssh, as the other answer suggests.
You can use anything that produces output to write lines one by one, followed by "\r\n", and pipe these lines to ncat, e.g.:
echo -e "command1\r\ncommand2\r\n" | ncat localhost 5000
The -e option makes echo interpret \r\n as special characters.
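To automate the parsing described in the question, the same idea scales to many sentences: generate one (parse ...) line per sentence and pipe the whole stream into the server. This is only a sketch; sentences.txt is an assumed input file with one sentence per line, port 5000 is the example port from above, the scm/parse commands are taken from the session shown in the question, and depending on your netcat variant you may need its -q option so the connection stays open long enough to read the replies.

#!/bin/bash
# Send the scheme-shell command first, then one (parse ...) per line of sentences.txt
{
printf 'scm\r\n'
while IFS= read -r sentence; do
printf '(parse "%s")\r\n' "$sentence"
done < sentences.txt
} | ncat localhost 5000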
Often I face this situation: I have sshed into a remote server and run some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on the remote machine, exit the connection, then scp user@remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to let me scp file back to local machine without exiting the connection. I did some searching, almost all results are telling me how to do scp from my local machine, which I already know.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection opened, and without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
Now we need to copy that file back to our local machine, and for some reason we don't want to open a new connection. OK, let's bring up ssh's command line by pressing Enter ~ C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
That's just like the regular -L/R/D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward the remote server's port 2222 to the local machine's port 22 (and this is where you need a local SSH server running on port 22; if it's listening on some other port, use that instead of 22).
Now just run scp on the remote server and copy our file to the remote server's port 2222, which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
Tricky, but if you really cannot just run scp from another terminal, it could help.
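If you know ahead of time that you will want to copy files back, you can request the same reverse forward when you first connect, which avoids the escape sequence entirely (port 2222 is arbitrary, and your local sshd is still assumed to be listening on port 22):

local $ ssh -R 2222:127.0.0.1:22 user@example.com
remote $ scp -P2222 abc.txt user@127.0.0.1: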
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request not to know it in advance.
Based on that, here's a bash function to "download" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl(){
scp "$1" ${SSH_CLIENT%% *}:/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote machine, and somefile.txt appears in my local Downloads folder.
Extras:
I use RSA keys (ssh-copy-id) to get around the password prompt
I found this trick to prevent the local bashrc from being sourced on the scp call
Note: this requires SSH access to the local machine from the remote one (is this often the case for anyone?)
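For reference, SSH_CLIENT contains the client IP address, the client port and the server port separated by spaces, so the ${SSH_CLIENT%% *} expansion keeps only the IP. A quick check from inside the session (the address shown is just an example):

$ echo "$SSH_CLIENT"
192.0.2.10 54321 22
$ echo "${SSH_CLIENT%% *}"
192.0.2.10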
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and others not flexible enough. A VPN server in between was also causing trouble for me with figuring out IP addresses.
So, the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
echo "scp user#remote:$(pwd)/$1 ."
}
I find rsync to be more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure.
function getCopyCommand {
echo "rsync -rvPR user#remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted into my local terminal to retrieve the file.
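Usage is then just, with an illustrative file name and remote path:

$ getCopyCommand results.log
scp user@remote:/home/user/project/results.log .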
You would need a local ssh server running on your machine; then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy; just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path
I have two questions:
There are multiple remote Linux machines, and I need to write a shell script which will execute the same set of commands on each machine (including some sudo operations). How can this be done with shell scripting?
When ssh'ing to the remote machines, how do I handle the prompt for RSA fingerprint authentication?
The remote machines are VMs created on the fly and I just have their IPs. So I can't place a script file on those machines beforehand and execute it from my machine.
There are multiple remote Linux machines, and I need to write a shell script which will execute the same set of commands on each machine (including some sudo operations). How can this be done with shell scripting?
You can do this with ssh, for example:
#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done
When ssh'ing to the remote machines, how do I handle the prompt for RSA fingerprint authentication?
You can add the StrictHostKeyChecking=no option to ssh:
ssh -o StrictHostKeyChecking=no -l username hostname "pwd; ls"
This will disable the host key check and automatically add the host key to the list of known hosts. If you do not want to have the host added to the known hosts file, add the option -o UserKnownHostsFile=/dev/null.
Note that this disables certain security checks, for example protection against man-in-the-middle attacks. It should therefore not be applied in a security-sensitive environment.
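Putting the two parts together, a sketch of the loop from above with the host key checks relaxed (again, only acceptable in a trusted environment):

#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done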
Install sshpass using apt-get install sshpass, then edit the script and put your Linux machines' IPs, usernames and passwords in the respective order. After that, run the script. That's it! This script will install VLC on all systems.
#!/bin/bash
SCRIPT="cd Desktop; pwd; echo -e 'PASSWORD' | sudo -S apt-get install vlc"
HOSTS=("192.168.1.121" "192.168.1.122" "192.168.1.123")
USERNAMES=("username1" "username2" "username3")
PASSWORDS=("password1" "password2" "password3")
for i in ${!HOSTS[*]} ; do
echo ${HOSTS[i]}
SCR=${SCRIPT/PASSWORD/${PASSWORDS[i]}}
sshpass -p ${PASSWORDS[i]} ssh -l ${USERNAMES[i]} ${HOSTS[i]} "${SCR}"
done
This works for me.
Syntax: ssh -i pemfile.pem user_name@ip_address 'command_1; command_2; command_3'
#! /bin/bash
echo "########### connecting to server and run commands in sequence ###########"
ssh -i ~/.ssh/ec2_instance.pem ubuntu@ip_address 'touch a.txt; touch b.txt; sudo systemctl status tomcat.service'
There are a number of ways to handle this.
My favorite way is to install http://pamsshagentauth.sourceforge.net/ on the remote systems and also your own public key. (Figure out a way to get these installed on the VM, somehow you got an entire Unix system installed, what's a couple more files?)
With your ssh agent forwarded, you can now log in to every system without a password.
And even better, that pam module will authenticate for sudo with your ssh key pair so you can run with root (or any other user's) rights as needed.
You don't need to worry about the host key interaction. If the input is not a terminal then ssh will just limit your ability to forward agents and authenticate with passwords.
You should also look into packages like Capistrano. Definitely look around that site; it has an introduction to remote scripting.
Individual script lines might look something like this:
ssh remote-system-name command arguments ... # so, for example,
ssh target.mycorp.net sudo puppet apply
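As a concrete sketch of such a loop with agent forwarding turned on (-A): the host names are placeholders, and sudo on each remote machine is assumed to be configured with the pam_ssh_agent_auth module mentioned above so that no password prompt appears.

#!/bin/bash
# -A forwards the local ssh agent so the remote sudo can authenticate against your key
for host in target1.mycorp.net target2.mycorp.net ; do
ssh -A "$host" sudo puppet apply
done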
The accepted answer sshes to machines sequentially. In case you want to ssh to multiple machines and run some long-running commands like scp concurrently on them, run the ssh command as a background process.
#!/bin/bash
username="user"
servers=("srv-001" "srv-002" "srv-002" "srv-003");
script="pwd;"
for s in "${servers[#]}"; do
echo "sshing ${username}#${s} to run ${script}"
(ssh ${username}#${s} ${script})& # Run in background
done
wait # If removed, you can run some other script here
If you are able to write Perl code, then you should consider using Net::OpenSSH::Parallel.
You would be able to describe the actions that have to be run in every host in a declarative manner and the module will take care of all the scary details. Running commands through sudo is also supported.
For this kind of task, I repeatedly use Ansible, which lets you run bash scripts coherently across several containers or VMs. Ansible (more precisely, Red Hat) now also has a web interface, AWX, which is the open-source edition of their commercial Tower.
Ansible: https://www.ansible.com/
AWX: https://github.com/ansible/awx
Ansible Tower: a commercial product; you will probably first explore the free open-source AWX rather than the 15-day free trial of Tower.
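As a taste of Ansible's ad-hoc mode, assuming an inventory file hosts.ini (the name is arbitrary) that simply lists the machine IPs, the same command can be run on all of them in one line; --become takes care of the sudo part:

# hosts.ini
192.168.1.121
192.168.1.122

ansible all -i hosts.ini -m shell -a 'uptime' --become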
There are multiple ways to execute commands or scripts on multiple remote Linux machines.
One simple and easy way is via pssh (parallel ssh program).
pssh is a program for executing ssh in parallel on a number of hosts. It provides features such as sending input to all of the processes, passing a password to ssh, saving output to files, and timing out.
Example & Usage:
Connect to host1 and host2, and print "hello, world" from each:
pssh -i -H "host1 host2" echo "hello, world"
Run commands via a script on multiple servers:
pssh -h hosts.txt -P -I<./commands.sh
Usage & run a command without checking or saving host keys:
pssh -h hostname_ip.txt -x '-q -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes' -i 'uptime; hostname -f'
If the file hosts.txt has a large number of entries, say 100, then the parallelism option may also be set to 100 to ensure that the commands are run concurrently:
pssh -i -h hosts.txt -p 100 -t 0 sleep 10000
Options:
-I: Reads input and sends it to each ssh process.
-P: Tells pssh to display output as it arrives.
-h: Reads the hosts file.
-H: [user@]host[:port] for a single host.
-i: Displays standard output and standard error as each host completes.
-x args: Passes extra SSH command-line arguments.
-o option: Can be used to give options in the format used in the configuration files (/etc/ssh/ssh_config, ~/.ssh/config).
-p parallelism: Uses the given number as the maximum number of concurrent connections.
-q: Quiet mode; causes most warning and diagnostic messages to be suppressed.
-t: Makes connections time out after the given number of seconds; 0 means pssh will not time out any connections.
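For completeness, the file passed with -h is plain text with one host per line, optionally with a user and port, e.g.:

# hosts.txt
host1
user@host2
host3:2222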
When ssh'ing to the remote machines, how do I handle the prompt for RSA fingerprint authentication?
Disable StrictHostKeyChecking to handle the RSA authentication prompt:
-o StrictHostKeyChecking=no
Source: man pssh
This worked for me. I made a function. Put this in your shell script:
sshcmd(){
ssh $1@$2 $3
}
sshcmd USER HOST COMMAND
If you have multiple machines on which you want to run the same command, repeat that line, separated by a semicolon. For example, if you have two machines you would do this:
sshcmd USER HOST COMMAND ; sshcmd USER HOST COMMAND
Replace USER with the user of the computer. Replace HOST with the name of the computer. Replace COMMAND with the command you want to do on the computer.
Hope this helps!
You can follow this approach:
Connect to the remote machine using an Expect script. If your machine doesn't have expect, you can install it. Writing an Expect script is very easy (google for help on this).
Put all the actions that need to be performed on the remote server in a shell script.
Invoke the remote shell script from the Expect script once the login is successful.
I'm currently facing a weird problem while executing a command from my bash script.
My script has this command,
ssh IPAddressA -l root "ssh -l root IPAddressB ls"
where IPAddressA & IPAddressB would be replaced by hard coded IP addresses of two machines accessible from each other.
The user would enter the password whenever asked. But, I'm getting this error after I enter the IPAddressA's password.
root@IPAddressA's password:
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
]$
There's a better trick for that.
In ~/.ssh/config add a host entry for IPAddressB that proxies through IPAddressA, configured like so:
Host IPAddressB
User someguy
ProxyCommand ssh -q someguy@IPAddressA nc -q0 %h 22
The slick thing about this method is that you can scp/sftp to IPAddressB without any weird stuff on your shell command line.
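On reasonably recent OpenSSH versions you can get the same effect without relying on nc being installed on the jump host, using either the -W option or (OpenSSH 7.3+) the ProxyJump directive; a sketch of the equivalent entry:

Host IPAddressB
User someguy
ProxyCommand ssh -q -W %h:%p someguy@IPAddressA

or, with OpenSSH 7.3 or later:

Host IPAddressB
User someguy
ProxyJump someguy@IPAddressA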
For bonus points, generate yourself a key pair and drop the public key on both IPAddressA and IPAddressB in ~/.ssh/authorized_keys. If you don't put a passphrase on it, you won't even be bothered to enter that.
Additionally, if you're trying to get access to a remote LAN that only has a single entry point, SSH can actually act as a VPN client, bridging you through the proxy host. Of course, the remote end needs to support tap/tun devices (as does your local machine)... but if it's all there already, it's a super painless mechanism to bridge.
When the inner ssh prompts for a password, there is no interactive keyboard available. You can get what you want with ssh tunneling:
ssh root@IPAddressA -L2222:IPAddressB:22 -Nf
ssh root@localhost -p2222
The first line opens a tunnel, so your localhost port 2222 points to IPAddressB:22, and it puts the ssh process in the background (-f) without executing a command (-N).
The second line connects to IPAddressB:22 through the newly opened tunnel.
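Once the tunnel is up, the ls from the original command (or an scp; the remote path here is only an example) can be pointed at the local end of the tunnel:

ssh root@localhost -p2222 ls
scp -P2222 root@localhost:/some/remote/file .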