I configured my authorized_keys as:
from="192.168.1.*",restrict ssh-rsa AAAA***
The TTY is restricted, but the remote user can still run commands on my side:
(Notice the lines marked with <<<<<<; these are the commands I typed.)
$ ssh kes
PTY allocation request failed on channel 0
No mail.
asdf <<<<<<
-bash: line 1: asdf: command not found
ls <<<<<<
bin
work
x
: > test <<<<<<
echo "sdf" > test2 <<<<<<
cat test2 <<<<<<
sdf
Why is the connection still interactive?
The magic is command="cat ~/t/db/tucha.sql.gz". In combination with restrict, it allows the user to do only this one thing.
I added it to the ~/.ssh/authorized_keys file:
from="192.168.1.*",restrict,command="cat ~/t/db/tucha.sql.gz" ssh-rsa AAAAB3NzaC1yc2EXXXXXXXXX name
When the user connects to my host, he gets the contents of the tucha.sql.gz file.
He must connect using the command: ssh myhost > local.name.sql.gz
Thus the output from my host is saved into the local.name.sql.gz file on his side.
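A quick way to convince yourself (a sketch, reusing the kes alias from above): whatever command the client asks for is ignored and the forced command runs instead.
ssh kes 'ls; cat /etc/passwd' > still_the_dump.sql.gz   # output is still the tucha.sql.gz dump
ssh kes > dump.sql.gz                                   # same thing: just the dump, no shell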
I have a task to copy ssh keys from one node to all others in an array. For this, I wrote a simple bash script which copies itself to other nodes and runs it there. What confuses me is the fact that ssh-copy-id works fine on the node where the script is executed manually but it fails if run remotely in an ssh session. Here’s the script:
 1 #!/bin/bash
 2 # keys-exchange.sh
 4 nodes=( main worker-01 worker-02 worker-03 )
 6 for n in $( echo "${nodes[@]}" ); do
 7     [ $n != $HOSTNAME ] && ssh-copy-id $n
 8 done
10 if [ -z $REMOTE ]; then
11     for n in $( echo ${nodes[@]} ); do
12         if [ $n != $HOSTNAME ]; then
13             scp $0 $USER@$n:$0 > /dev/null
14             ssh $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
15         fi
16     done
17 fi
The code in rows 6-8 works fine copying the ssh key to all nodes other than itself. Then, if the REMOTE variable is not set, code in rows 11-16 copies the script to remote nodes (except the node it’s running on, row 12) and runs it there. In row 14, I set and pass the variable REMOTE to skip the code block in rows 10-17 (so the script copies itself only from the source node to others), and the variable HOSTNAME because I found it’s not set in an ssh session. The user’s name and the script path are completely the same on the source node and all destination nodes.
When running on the source node, it works properly asking for a confirmation and the remote host's password. But the script that has just run successfully on the source node fails running in the remote ssh session: ssh-copy-id fails with the following error:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/username/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: Host key verification failed.
At that moment, no .ssh/known_hosts file is present on the remote node, so I can't do ssh-keygen -R. What am I missing, and how do I make it work?
ssh $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
Try running ssh with the "-tt" option to request a PTY (pseudo-TTY) for the remote session:
ssh -tt $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
    ^^^
In the case that you're describing, you're launching ssh on the remote system to connect to a third system. The ssh instance doesn't have a saved copy of the third host's host key. So you'd normally expect ssh to prompt the user whether to continue connecting to the third host. Except that it's not prompting the user--it's just refusing to connect to the third host.
When ssh is invoked with a command to run on the remote system, by default it runs that command without a TTY. In this case, the remote ssh instance sees that it's running without a TTY and runs non-interactively. When it's non-interactive, it doesn't prompt the user for things like passwords and whether to accept a host key or not.
Running the local ssh instance with "-tt" causes it to request a PTY for the remote session. So the remote ssh instance will have a TTY and it will prompt the user--you--for things like host key confirmations.
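Alternatively (a sketch, not part of the -tt approach above): if an interactive prompt is impractical, you can pre-seed known_hosts on the node that runs ssh-copy-id, so there is nothing left to confirm:
# run once on the node that will do the copying; -H stores hashed host names
ssh-keyscan -H main worker-01 worker-02 worker-03 >> ~/.ssh/known_hosts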
ssh-copy-id is not copying your keys to remote hosts; it adds them to ~/.ssh/authorized_keys there. When you then jump to that remote host, there are no keys (or are there?), so there is nothing to copy further. Also, if ssh-copy-id is run without the -i option, it will copy (add to authorized_keys) all .pub keys from your ~/.ssh dir, which may not be what you want, so I suggest running it like this: ssh-copy-id -i $key $host
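In the script above, that would look something like this (a sketch, assuming the key is the id_rsa.pub mentioned in the error output):
[ $n != $HOSTNAME ] && ssh-copy-id -i ~/.ssh/id_rsa.pub $n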
Be sure that on the destination side, the /etc/ssh/sshd_config is configured to accept the type of key that was generated.
PubkeyAcceptedKeyTypes ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa
I generated the key using ssh-keygen -t rsa -b 4096 ... however, the line above did not include ,ssh-rsa at the end, so even though ssh-copy-id updated my destination, sshd did not accept RSA keys. Once I added ,ssh-rsa and ran systemctl restart sshd, it worked!
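A quick way to see which key types the running server actually accepts (requires root on the destination):
sshd -T | grep -i pubkeyaccepted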
I have to create a file functon.txt, containing hello world, under a particular directory on lots of machines. So far I have been doing this manually, one by one: logging into each box and creating the file. That directory is owned by root, so I have to make sure the new file is also owned by the root user.
david@machineA:~$ sudo su
[sudo] password for david:
root@machineA:/home/david# cd /opt/Potle/ouyt/wert/1
root@machineA:/opt/Potle/ouyt/wert/1# vi functon.txt
root@machineA:/opt/Potle/ouyt/wert/1# ssh david@machineB
david@machineB:~$ sudo su
[sudo] password for david:
root@machineB:/home/david# cd /opt/Potle/ouyt/wert/1
root@machineB:/opt/Potle/ouyt/wert/1# vi functon.txt
root@machineB:/opt/Potle/ouyt/wert/1# ssh david@machineC
.....
Now I have to do this on around 200 machines. Is there any way I can do this through some script? I am OK with typing passwords multiple times if I have to, but I don't want to manually log into those boxes and do all the other steps by hand.
I have a file hosts.txt which contains the machines, one per line. I can read this file line by line and do the above things, but I am not sure how.
This is just a one-time exercise for me, so any easy or simple way should be fine. I can even hardcode my password in the script to do this job. What is the best way to accomplish this task?
After installing Ansible:
ansible -i /path/to/hosts.txt -m ping -u david --ask-pass all
See if you can ping the machines successfully. If that works, try the following with 2 machines first (create another txt file with just those 2 machines and pass it to the -i option). Then you can run it for all machines. If the directory does not exist, the command will fail and you will see the failed machines in the summary.
ansible -i /path/to/hosts.txt -m copy -a "src=/path/to/functon.txt dest=/opt/Potle/ouyt/wert/1/functon.txt" -u david --ask-pass --become --become-user root --ask-become-pass all
I didn't test this. So use caution.
-i: input host(s)
-m: module
-a: module arguments
-u: user
--ask-pass: Ask for user password
--become: become another user
--become-user: new user
--ask-become-pass: Ask for become user password
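If you also want to be explicit about ownership and permissions, the copy module accepts owner, group and mode arguments as well; a variant of the command above (again untested, so use caution):
ansible -i /path/to/hosts.txt -m copy -a "src=/path/to/functon.txt dest=/opt/Potle/ouyt/wert/1/functon.txt owner=root group=root mode=0644" -u david --ask-pass --become --become-user root --ask-become-pass all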
You can use expect to automate SSH copy / SSH login:
#!/usr/bin/expect
# usage: ./create_config.sh <user@host:/remote/dir/> <password> <local_file>
set password [lindex $argv 1]
# copy the local file (argument 2) to the remote destination (argument 0)
spawn scp -P 22 [lindex $argv 2] [lindex $argv 0]
# wait for the password prompt and answer it
expect "*password:*"
send -- "$password\r"
send -- "\r"
expect eof
The expect command waits until the string you give as its argument is received.
You can iterate over the hosts in hosts.txt and run this script for each one:
./create_config.sh david@machineA:/opt/Potle/ouyt/wert/1/ somePassword functon.txt
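To cover all machines, a minimal loop over hosts.txt (a sketch reusing the path, password placeholder, and file name from above):
while read -r host; do
    ./create_config.sh "david@$host:/opt/Potle/ouyt/wert/1/" somePassword functon.txt
done < hosts.txt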
If you don't have the possibility to do an SSH copy but only SSH, you can still send commands with expect:
#!/usr/bin/expect
set password [lindex $argv 1]
spawn ssh -p 22 [lindex $argv 0]
expect "*password:*"
send -- "$password\r"
send -- "\r"
# expect the command prompt : change this if needed
expect "*$*"
# execute some commands
send -- "echo 'some text to write to some file' > ~/some_file.txt\r"
# exit vm
send -- "exit\r"
expect eof
You can run this with:
./create_config.sh david@machineA somePassword
You could use sshfs: mount a machine, do what you want, unmount, and move on to the next.
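A rough sketch of that approach (assumes sshfs is installed; note that writing into a root-owned directory still needs root permissions on the remote side):
mkdir -p /mnt/remote
sshfs david@machineA:/opt/Potle/ouyt/wert/1 /mnt/remote
echo "hello world" > /mnt/remote/functon.txt   # fails if only root may write there
fusermount -u /mnt/remote                      # unmount (Linux), then move on to the next host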
I am trying to execute a command on a remote server using ssh. The command is:
ssh machine -l user "ls"
This command gets stuck partway through, and finally we have to suspend it.
But executing just ssh machine -l user works fine and connects us to the remote machine.
Can someone please help find the root cause of why ls on the remote server doesn't work over ssh?
EDIT 1: Here is the output after using the -v switch with SSH:
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug1: Sending command: ls
After printing Sending command: ls, the terminal gets stuck.
I suspect one of two things is happening. First of all, the ssh server may be set to start a particular command for the user, regardless of what command you asked to run. You'd see this behavior if the user was restricted to running SFTP in the usual manner, for example. There are two ways this may be set up:
A ForceCommand directive in the remote server's sshd configuration file.
A directive in the remote user's authorized_keys file for the key being used.
The simplest way to check this would be to log in to the remote server and examine the two files. Alternately, you could start one of these ssh sessions, let it hang, and then run "ps" on the remote server to see what actual processes are running for the user in question.
The other possibility is that the remote user has a line in his .bashrc or other shell startup script which is introducing a wait or else waiting for you to type something. Again, you should start one of these ssh sessions, let it hang, and then run "ps" on the remote server to see what actual processes are running for the user.
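A minimal sketch of those checks on the remote server (the username user is just a placeholder; run the ps while a hung session is open):
grep -i forcecommand /etc/ssh/sshd_config    # any server-wide forced command?
grep 'command=' ~user/.ssh/authorized_keys   # a forced command tied to the key?
ps -u user -f                                # what is the session actually running?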
Questions:
Does the problem occur on the commandline or within a script?
Are you prompted for your password?
Is there any output? If yes: post it here.
And try
ssh -v user#host "ls"
or
ssh -v -l user host "ls"
and you will get additional output. You can use the -v option up to 3 times for higher verbosity.
ssh -vvvl user host "ls"
EDIT:
If I had to debug this, I'd do the following:
go to the target machine, the one you want to 'ssh' to.
log in with the same user you tried with ssh
enter the "ls" command"
It is an unusal thing, but 'ls' is not necessarily what you expect it to be. At the commandline on the target-machine, try
which ls
and then use the output with the fully qualified name for your ssh call, e.g.:
ssh machine -l user "/bin/ls"
Remember that when executing a command via ssh you do not automatically have the same path as with a regular login.
Finally, examine your log files on the target machine. They usually reside under /var/log (at least under Debian).
EDIT2:
On Linux machines, I've sometimes experienced a problem with the 'ls' command hanging without any output. This happened to me when there were filesystems in the directory which were in some way 'invalid'. For example, if there was an invalid mount of an Android mtpfs, the ls command couldn't deal with that and hung.
So try to 'ls' a different directory, e.g.
ssh host -l user "ls /tmp"
If this works, then check from the command line whether there is a directory or a file with some invalid state that causes the ls command to fail.
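A couple of quick checks for that situation (the paths are only placeholders):
mount | grep -i fuse          # look for stale fuse/mtp mounts
cat /proc/mounts              # the kernel's own view of what is mounted
stat /path/to/suspect/entry   # will also hang if the underlying mount is dead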
I am logging into a remote machine through a shell script (by placing an ssh command in the script).
After the ssh command, the remaining lines of the script get executed on the current machine rather than the remote machine. How can I make the rest of the shell script's lines execute on the remote machine?
Let's say this is my script:
ssh username#ip-address
ls
whoami
----
The lines after ssh should execute on the remote machine rather than the current machine. How can I achieve this?
One possible solution would be to use a heredoc as in the following example:
$ ssh example.foo.com -- <<##
> ls /etc/
> cat /etc/passwd
> ##
Basically everything between the ## on the first line and the last line will be executed on the remote machine.
You could also use the contents of a file by either reading the contents of the file into a variable:
$ MYVAR=`cat ~/foo.txt`
$ ssh example.foo.com -- <<##
> $MYVAR
> ##
or by simply performing the same action inside the heredoc:
$ ssh example.foo.com -- <<##
> `cat ~/foo.txt`
> ##
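One caveat worth noting: because the ## delimiter is unquoted, variables and backticks inside the heredoc are expanded locally before being sent, which is exactly what the $MYVAR example relies on. If you want the remote shell to do the expansion instead, quote the delimiter:
$ ssh example.foo.com -- <<'##'
> echo "remote host is $(hostname)"
> ##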
Is your login passwordless?
If yes, you can just pipe the script to ssh to execute it on the remote machine, like:
cat myshellscript.sh | ssh blah@blah.com -q
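An equivalent form that avoids the cat (just another way to write the same thing):
ssh -q blah@blah.com 'bash -s' < myshellscript.sh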
Use the ssh command with the -t option. For example:
ssh -t user@remotehost 'uptime'
user@remotehost's password:
23:35:33 up 2:05, 3 users, load average: 0.79, 0.52, 0.60
Connection to remote closed.
When you specify -t, ssh allocates a pseudo-terminal on the remote machine and executes the command there.
My actual problem was to automatically execute a sh file on another host and return its output to my system. Is this possible?
"I have to execute the file @ host2 and get/write the output @ host1"
Use SSH:
piskvor@host1$ ssh piskvor@host2 "/home/piskvor/myscript.sh" > myscript.out
What I did here: from host1, I opened an SSH connection as piskvor to host2 and used it to execute /home/piskvor/myscript.sh (assuming it exists and I can run it). The output is redirected to the file myscript.out on host1.
If you need password-less login, look into SSH keys.
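A minimal sketch of setting that up with standard OpenSSH tooling (default file names assumed):
ssh-keygen -t ed25519         # on host1: generate a key pair, accept the defaults
ssh-copy-id piskvor@host2     # install the public key on host2 (asks for the password once)
ssh piskvor@host2 "/home/piskvor/myscript.sh" > myscript.out   # now runs without a password prompt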