Issuing Command Via SSH Prompts for Password - linux

I inherited a project that has little to no documentation, and I'm in the process of documenting everything. I'm trying to debug an issue with one line of a script that is executed on the host machine and uses SSH to call out to a LAN-attached Raspberry Pi to return some information about the Pi.
We already have working units of this Raspberry Pi that can execute the script without issue, and I'm not sure what the difference is. When the script targets the new one, it prompts for the Pi's root password, which it never did with the previous devices. I assume it has something to do with the SSH configuration, but I don't know enough about SSH to say what the cause would be.
The line in particular causing the issue is:
ssh -o StrictHostKeyChecking=no {host_name} uname -a &>/dev/null
rc=$? # capture the exit status of the remote command (its output is discarded above)
{host_name} of course is the actual host name it's connecting to, but I've left that part out for privacy reasons. The script is the same on both machines.
Both Pi devices are the same model and I'm having trouble narrowing down what could cause me to not be able to execute this command. Does anyone know what I need to configure in order to be able to execute this command on the Pi remotely?

Quick fix:
sshpass -p 'password' ssh -o StrictHostKeyChecking=no user@server
Detailed fix:
Most likely you need to set up asymmetric keys (public/private) for proper passwordless login. Your command shows no sign of key use (e.g. -A or -i /path/to/key), so I'm assuming you are not using them. The root user is generally blocked from SSH logins (apparently not your problem); I would set up another user for this, or change the sshd config. You could also compare the sshd configurations between the two Pi boxes.
See: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
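A minimal key-based setup might look like this (a sketch; the user and host names are placeholders for your environment):
# on the host machine: generate a key pair (default path, empty passphrase for unattended use)
ssh-keygen -t ed25519
# install the public key into the Pi user's ~/.ssh/authorized_keys
ssh-copy-id pi@raspberrypi.local
# this should now run without a password prompt
ssh -o StrictHostKeyChecking=no pi@raspberrypi.local uname -a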

Okay, so after some more digging around, I discovered that there was a separate .ssh directory under /root that contained an authorized_keys file. After copying this to the new Pi, it worked. I had been wondering all this time if there was a separate config folder for root, but I've never gone digging around /root, so I wasn't aware that it was there.
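For reference, the fix amounts to something like this (a sketch; the new Pi's host name is a placeholder, and sshd is strict about these permissions):
# copy root's authorized_keys from the working Pi to the new one
scp /root/.ssh/authorized_keys root@new-pi:/root/.ssh/authorized_keys
# make sure the permissions are what sshd expects
ssh root@new-pi 'chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys'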

Related

Linux - shutdown-script with SSH

I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over SSH.
The script works when run by itself, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
So far I know that the shutdown script works outside of the shutdown routine when called directly, and that the symlink I made is at least partially working, because I see the changes in test.txt every time I shut down; only the ssh command has no effect.
Can anyone help me how to solve my problem?
Have you tried single quotes?
The first link on Google shows it:
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
As for sudo asking for a password, see:
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check these or other links on the web that have useful information.
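In short, the usual answer from those links is a sudoers rule that lets the SSH user run the shutdown command without a password, something like this (a sketch; the user name pi is an assumption, and the file should always be edited with visudo):
# /etc/sudoers.d/remote-shutdown (create with: sudo visudo -f /etc/sudoers.d/remote-shutdown)
pi ALL=(root) NOPASSWD: /sbin/shutdown -h now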
I have now got the script running by myself. I don't really know why it works now, but I'll write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system made a difference, but I'll note them anyway. In the meantime, since I hadn't managed to get the script working, I added a button to shut the system down manually. I also made a script that backs up the MySQL database (which is on the Raspberry Pi I want to switch off) and copies the backup to the Raspberry Pi that is supposed to switch the other one off automatically via the shutdown script. This happens with scp, and a key was generated so no password is needed.
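That backup-and-copy step might look roughly like this (a sketch; the database name, credentials, paths, and address are all placeholders):
#!/bin/sh
# dump the database, then copy the dump to the controlling Pi over key-based scp
mysqldump -u backupuser -p'secret' mydb > /tmp/mydb-backup.sql
scp /tmp/mydb-backup.sql pi@10.0.0.98:/home/pi/backups/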
I also changed my script to write a log message:
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the next few days, until now.
I put together a command to suspend or shut down a remote host over SSH. You may find it useful. It can suspend or shut down a remote computer without an interactive session, and without keeping a terminal busy. You will need to allow the remote user to shut down or suspend via sudo without a password, and the local and remote machines must be set up for SSH without an interactive login. The command is more useful for suspending, since a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)

SSH and run commands mid-script

I'm writing a bash script to setup a GRE Tunnel, on both local and a remote machine.
How would I be able to (in the middle of the script) have a piece of code that logs into the remote machine, runs the required iptables commands, and logs out, then continue with the setup on the LOCAL machine?
If the client machine is running bash as well and has the OpenSSH client installed, you can just run ssh user@host yourCommandToRunWithoutPty. This runs the command WITHOUT a pty/tty, which is important in some cases, such as sudo (sudo expects a tty on which to ask for the password).
Because of this, I would suggest adding passwordless access to that command by that user in your server's /etc/sudoers, if (securely!) possible.
If configured correctly, your client should be able to just run ssh user@host sudo iptables --some-iptables-switches.
NOTE: When adding passwordless commands to your /etc/sudoers, always be as explicit as possible with the arguments, so nobody can abuse arguments that were never intended to run without a sudo password.
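For example, a rule pinned to one exact invocation might look like this (a sketch; the user name and the iptables arguments are assumptions for illustration):
# /etc/sudoers.d/gre-tunnel (create with: sudo visudo -f /etc/sudoers.d/gre-tunnel)
tunneluser ALL=(root) NOPASSWD: /sbin/iptables -t nat -A POSTROUTING -o gre1 -j MASQUERADE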

Executing a command on remote via ssh doesn't work

I am trying to execute a command on a remote server using ssh. The command is:
ssh machine -l user "ls"
This command gets stuck partway through, and we eventually have to suspend it.
However, plain ssh machine -l user works fine and connects us to the remote machine.
Can someone please help find the root cause of why ls on the remote server doesn't work over ssh?
EDIT 1: Here is the output after using the -v switch with SSH:
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug1: Sending command: ls
After printing Sending command: ls, the terminal gets stuck.
I suspect one of two things is happening. First of all, the ssh server may be set to start a particular command for the user, regardless of what command you asked to run. You'd see this behavior if the user was restricted to running SFTP in the usual manner, for example. There are two ways this may be set up:
A ForceCommand directive in the remote server's sshd configuration file.
A directive in the remote user's authorized_keys file for the key being used.
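For reference, the two mechanisms look like this (a sketch; the Match block, the forced command, and the key material are illustrative placeholders):
# in /etc/ssh/sshd_config on the remote server: forces this command for matching users
Match User restricteduser
    ForceCommand internal-sftp
# in ~/.ssh/authorized_keys: the command= option forces a command for this one key
command="/usr/lib/openssh/sftp-server" ssh-ed25519 AAAA... user@client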
The simplest way to check this would be to log in to the remote server and examine the two files. Alternately, you could start one of these ssh sessions, let it hang, and then run "ps" on the remote server to see what actual processes are running for the user in question.
The other possibility is that the remote user has a line in his .bashrc or other shell startup script which is introducing a wait or else waiting for you to type something. Again, you should start one of these ssh sessions, let it hang, and then run "ps" on the remote server to see what actual processes are running for the user.
Questions:
Does the problem occur on the commandline or within a script?
Are you prompted for your password?
Is there any output? If yes: post it here.
And try
ssh -v user@host "ls"
or
ssh -v -l user host "ls"
and you will get additional output. You can use the -v option up to three times for higher verbosity.
ssh -vvvl user host "ls"
EDIT:
If I had to debug this, I'd do the following:
go to the target machine, the one you want to 'ssh' to.
log in with the same user you tried with ssh
enter the "ls" command"
It is an unusual thing, but 'ls' is not necessarily what you expect it to be. At the command line on the target machine, try
which ls
and then use the output with the fully qualified name for your ssh call, e.g.:
ssh machine -l user "/bin/ls"
Remember that when executing a command via ssh you do not automatically get the same PATH as with a regular login.
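You can see the difference directly (a sketch; the host and user are placeholders):
ssh machine -l user 'echo $PATH'                  # PATH in the non-login remote shell
ssh machine -l user "bash -l -c 'echo \$PATH'"    # PATH after the login startup files run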
Finally, examine the log files on the target machine. They usually reside under /var/log (at least on Debian).
EDIT2:
On Linux machines, I've sometimes experienced a problem with the 'ls' command hanging without producing any output. This happened to me when there were filesystems in the directory that were in some way 'invalid'. For example, when there was a broken mount of an Android MTP filesystem, ls couldn't deal with it and hung.
So try to 'ls' a different directory, e.g.
ssh host -l user "ls /tmp"
If this works, then check from the command line whether there is a directory or a file with some invalid state that causes the ls command to fail.

OpenSSH on Cygwin

I have a Linux box (Ubuntu Server 13.04) which needs to run a job on a Windows 7 box (with cygwin installed) under a specific user's account. I have set up a password-less login to access the Windows machine through openSSH.
The problem I face is the following: when I manually ssh into the Win7 machine and launch the job, everything is fine. However, when I launch the job using the ssh winuser@winmachine command, I end up connected to the Windows machine as the privileged sshd user 'cyg_server':
$ whoami
linuxuser
$ ssh winuser@Win7
$ whoami
winuser
$ exit
$ ssh winuser#Win7 "whoami; exit"
cyg_server
>> This should be 'winuser' too.
Why could this be happening? I have tried running ssh-host-config again to no avail. I don't see what parameters might influence this in sshd_config either.
Any help is greatly appreciated!
I had similar issues when I was connecting to a Cygwin machine using SSH. I used to have no problems logging on until one day I noticed that my path wasn't set correctly. I spent ages recreating the configuration files with ssh-host-config only to find my answer in the man page for ssh:
If command is specified, it is executed on the remote host instead of
a login shell.
The problem was that the alias I used to connect to the machine had been changed to attach to a screen session automatically (screen -DR). That meant that if there wasn't already a screen session to attach to, screen was not being run as a child process of a user login shell, and so it did not inherit any of the relevant user environment.
When you provide a command as an argument to ssh, the resulting command is run as a process started by cyg_server. Ensuring the SSH command is being run as part of a login shell should do what you want:
ssh winuser@Win7 "bash -l -c 'whoami; exit'"
Explanation (from the bash man page):
-c string If the -c option is present, then commands are read from string.
-l Make bash act as if it had been invoked as a login shell.
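Under the same setup, the failing check from the question should then report the expected user (a sketch of the expected session, per the explanation above):
$ ssh winuser@Win7 "bash -l -c 'whoami'"
winuser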

Script to automate two consecutive ssh connections

I know it is possible to write a shell script that passes a hard-coded password to an ssh connection for authentication (using expect). However, what I need is slightly more complicated.
At my university I have a desktop computer appointed to me. I can connect remotely to this computer by first making a ssh connection with some server, then making another ssh connection from that server to my appointed desktop computer. This goes like:
localuser@localcomputer:~$ ssh -X username@serveraddress
username@serveraddress password:
server$ ssh -X username@remotecomputeraddress
username@remotecomputeraddress password:
username@remotecomputer:~>
Is there a way to write a script which could automate the above (i.e. performing two consecutive ssh connections)?
Thanks in advance!
PS: Both the local and the remote computers are running Linux.
You can do this interactively with:
ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
Note that this is not a pipe - the second ssh is the command to run on the connection created by the first ssh. The -t options are necessary to allocate the pseudo-ttys needed for interaction (password gathering, as well as the ultimate goal - an interactive session on the remote system). Wrapping it up with expect is left as an exercise for the reader... ;-)
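A minimal expect wrapper over that command might look like this (a sketch; the addresses, prompt patterns, and the hard-coded passwords are assumptions carried over from the question):
#!/usr/bin/expect -f
# chain the two ssh hops and answer both password prompts
set pass1 "serverpassword"
set pass2 "desktoppassword"
spawn ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
expect "password:" { send "$pass1\r" }
expect "password:" { send "$pass2\r" }
# hand the session over to the user
interact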
Bonus points for setting up proper private/public key pairs and ssh-agent so that the passwords aren't necessary (unless, of course, that is disallowed for security reasons).
Yes, you can do this.
Presuming you have your expect script in the file expect_script:
cat expect_script | ssh -X username@serveraddress sh -s
In this expect_script you must run ssh -X username@remotecomputeraddress.
And of course you can install public keys on the both hosts and use passwordless authentication.
I wrote something to do this with bang paths a while back:
http://stromberg.dnsalias.org/~strombrg/deep-ssh.html
So you'd set up passwordless, passphraseless authentication (or use an agent for the passphrase), like:
http://stromberg.dnsalias.org/~strombrg/ssh-keys.html
And then:
deep-ssh username@serveraddress!username@remotecomputeraddress command
If bash complains about the !, you can just escape it with a backslash.
The old timers will recognize that this is how UUCP paths were specified.
