I have a Python script that performs an scp operation to transfer files from one Synology DiskStation (running Linux) to several Mac OSX computers on a local network. I have set up RSA private/public key pairs on all the machines involved. If I invoke this Python script from the NAS drive as the admin user, everything works exactly as expected. My NAS drive's crontab file specifies the same admin user to run the script in exactly the same manner; however, when run from cron, scp fails with an exit status of 1.
What could cause this behavior?
[update]
Using scp -v (or scp -vv) reports more information. I can also see that it's supplying the correct key and the authentication is working as expected. Now I also notice that it has worked on a few of the OSX machines, but not all of them.
I verified that the .ssh/known_hosts and .ssh/authorized_keys files were all in place for admin, but not for root. For some reason, when the python script was run from the crontab, the known_hosts file that was checked was root's rather than the admin user's. The host-key check therefore came up during the scp command, and because there was no tty to answer it, scp simply consulted root's known_hosts, didn't find the remote host, and failed. After adding all the OSX remote hosts to that known_hosts file, it worked smoothly.
I think an alternate (and more secure) solution could have been to install a separate crontab under the admin user, but what I did was just faster.
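For reference, a sketch of how the cron job's scp call could be pinned to an explicit identity and known_hosts file, so that it no longer depends on which account cron runs it under (the DiskStation paths here are illustrative):

# force a specific key and known_hosts file regardless of the calling user
scp -i /var/services/homes/admin/.ssh/id_rsa \
    -o UserKnownHostsFile=/var/services/homes/admin/.ssh/known_hosts \
    backup.tar.gz admin@mac-client.local:/Users/admin/backups/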
I used to have an ssh connection to my server from the bash console on the Linux subsystem in Windows 10.
I reinstalled Windows and moved id_rsa, id_rsa.pub and known_hosts to exactly the same folder as on the previous system.
But now ssh doesn't see the keys and fails with the error Permission denied (publickey).
I can still connect from CMD with those keys, so the issue does not depend on the key files themselves.
On the previous system the ssh keys were stored at: C:\Users\My_Win10_User_Name\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\home\My_Linux_Subsystem_User_Name\.ssh, so I moved the keys to this folder.
What steps should be taken to make ssh on the Linux subsystem work again with my old keys?
ssh requires permissions to be correct. Your ~/.ssh directory must be 0700, and the files inside must be 0600. You also don't mention the ~/.ssh/authorized_keys file on the server you're connecting to, which must contain your public key (the contents of id_rsa.pub). That file, too, must be chmoded to 0600.
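For example, assuming the keys ended up in ~/.ssh inside the WSL home directory, the permissions could be reset like this (a sketch; adjust the filenames to whatever you actually moved):

# the directory itself must be private
chmod 700 ~/.ssh
# the key files and known_hosts must not be readable by anyone else
chmod 600 ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/known_hosts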
I'm a newbie to Linux and I'm trying to set up passphrase-less ssh. I'm following the instructions at this link: http://wiki.hands.com/howto/passphraseless-ssh/.
The above link says: "One often sees people using passphrase-less ssh keys for things like cron jobs that do things like this:"
scp /etc/bind/named.conf* otherdns:/etc/bind/
ssh otherdns /usr/sbin/rndc reload
which is dangerous because the key that's being used here is being offered root write access, when it need not be.
I'm kind of confused by the above commands.
I understand the usage of scp. But for ssh, what does "ssh otherdns /usr/sbin/rndc reload" mean?
"the key that's being used here is being offered root write access."
Can anyone also explain this sentence in more detail? Based on my understanding, the key is the public key generated by one server and copied to otherdns. What does "being offered root write access" mean?
it means to run a command on a remote server.
the syntax is
ssh <remote> <cmd>
so in your case
ssh otherdns /usr/sbin/rndc reload
is basically four parts:
ssh: run the ssh executable
otherdns: the remote server; since no user is given, the default user is used (the same one as is currently logged in, or the one configured for this host in ~/.ssh/config)
/usr/sbin/rndc: a program on the remote server to be run
reload: an argument to the program being run on the remote machine
so in plain words, your command means:
run the program /usr/sbin/rndc with the argument reload on the remote machine otherdns
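As a side note on the "root write access" concern: a common mitigation (a sketch, not taken from the linked page, with the key material abbreviated) is to restrict what the key may do in the authorized_keys entry on otherdns:

command="/usr/sbin/rndc reload",no-port-forwarding,no-pty,no-agent-forwarding ssh-rsa AAAA...abbreviated... dns-reload-key

With such an entry, a connection using that key can only trigger the rndc reload; it cannot run arbitrary commands or write files as root.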
I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, with mdexter@172.12.34.56 (the private key), because I thought that might be what created the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that for the first one I created a .ppk key in PuTTY so that I could connect through FileZilla for SFTP. But I think SSH is still using the .pem given by Amazon.
How can I tell the shell to always use my .pem key when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys and I'm wondering whether this plays into that.
You could solve this in 3 ways:
By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are ssh'ing into the 2nd instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
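A minimal sketch of that first option (assuming the key sits in ~/.ssh/mykey.pem as in the question):

# note: this overwrites any existing ~/.ssh/id_rsa
cp ~/.ssh/mykey.pem ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa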
Using ssh-agent (ssh-agent will manage the keys for you)
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
Using ssh-config file:
From the machine you are ssh'ing from, put the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
HostName 54.987.6.54
Port 22
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Once you do that, you can ssh like this:
ssh host2
After performing any of the above steps you should be able to ssh into your second instance with out specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in, so to make it permanent see this SO question.
I am trying to pull a file from another computer into the R environment in RStudio on CentOS 6.
I tried it in plain R first, and when I issue
readLines(pipe( 'ssh root@X.X.X.X "cat /path/somefile.sh"' ))
it correctly asks me for the password of my ssh key and reads the contents.
However if the same command is executed from RStudio all I get is:
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied, please try again.
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied, please try again.
ssh_askpass: exec(rpostback-askpass): No such file or dire
Permission denied (publickey,gssapi-with-mic,password).
I suspect that the reason is that RStudio on CentOS actually runs under the rstudio-server user (the GUI is served in a browser). Does anyone know how to properly access ssh'd resources from it?
UPD: after executing
Sys.setenv(PATH = paste0(Sys.getenv('PATH'), ':/usr/lib/rstudio-server/bin/postback'))
as suggested below, it no longer outputs the askpass errors, but it still does not work. Now it seems that the console waits indefinitely for the command to finish.
rpostback-askpass is part of RStudio. It may help to add its location (/usr/lib/rstudio-server/bin/postback on my system) to PATH so that ssh can find it:
Sys.setenv(PATH = paste0(Sys.getenv('PATH'), ':/usr/lib/rstudio-server/bin/postback'))
UPDATE: RCurl has an scp function for copying files over an ssh connection. See this answer for details. If you are running your scripts from RStudio, you can use its API to enter the ssh password interactively with hidden input:
pass <- .rs.askForPassword("password?")
and the rstudioapi package can help determine whether the script was launched by RStudio or not.
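A different way around the prompt altogether (a sketch, not from the answer above, and assuming key-based login to the remote host is acceptable) is to give the account the R session runs under its own key, so ssh never needs askpass at all:

# generate a passphrase-less key for the account running the R session
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# authorize it on the remote machine (same host as in the question)
ssh-copy-id root@X.X.X.X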
I am trying to execute a .bat file on a remote Windows machine in the cloud from my Linux box. The .bat file starts the Selenium server, and then my Selenium tests are run. I am not able to start the Selenium RC server on that machine. I tried Telnet, but the problem is that when the Telnet session is closed, the RC server port is closed as well. Since my code has to start the server, I also tried the Ant telnet task and a telnet shell script; both ways the port was closed.
I have read about OpenSSH, psexec for Linux and Cygwin, but I am not sure how to use them or whether they will solve my problem.
I have tried starting a service that launches the server, but with this method the browser is not visible; all tests run in the background, and since my script takes screenshots, browser visibility is a must.
Now my question is: what should I use, and which is preferable for the job?
Whatever I choose should be driven from code, whether shell, Ant or PHP.
Thanks in advance.
Let's go through the various options you mentioned:
psexec: This is pretty much a PC-only thing. Plus, you must make sure that newer Windows machines can get through the UAC that is enabled by default. UAC is the prompt you see all the time on Vista and Windows 7 when you try to do something that requires administrator privileges. You can try something called winexe, a Linux program that speaks the psexec protocol, but I've had problems getting it to work.
OpenSSH: There are two main flavors of SSH, and OpenSSH is the one used by the vast majority of sites. SSH has several advantages over other methods:
SSH is secure: Your network traffic is encrypted.
SSH can be password independent: You can setup SSH to use private/public keys. This way, you don't even have to know the password on the remote server. This makes it more secure since you don't have passwords being stored on various systems. And, in many Windows sites, passwords have to be changed every month or so or the account is locked.
SSH can do more than just execute remote commands: There are two sub-protocols on SSH called SCP and SFTP. These allow you to transfer files between two machines. Since they work over SSH, you get all of the advantages of SSH including encrypted packets, and public/private key protection.
SSH is well implemented in the Unix World: You'll find SSH clients built into Ant, Maven, and other build tools. Programs like CVS, Subversion, and Git can work over SSH connections too. Unfortunately, the Windows World operates in a different space time dimension. To use SSH on a Windows system requires third party software like Cygwin.
Cygwin: Cygwin is sort of an odd beast. It's a layer on top of Windows that allows many of the Unix/GNU libraries to work over Windows. It was originally developed to allow Unix developers to run their software on Windows DOS systems. However, Cygwin now contains a complete Unix like system including tools such as Perl and Python, BASH shell, and many utilities such as an SSH server. Since Cygwin is open source, you can download it for free and run SSH server. Unfortunately, I've had problems with Cygwin's SSH server. Another issue: If you're running programs remotely, you probably want to run them in a Windows environment and not the Cygwin environment.
I recommend that you look at WinSSHD from Bitvise. It's a commercial implementation of an SSH server for Windows and is not open source. It's about $100 per license, and you need a license on each server. However, it's a robust implementation and has all of the features SSH has to offer.
You can also look at Copssh, which is a package of Cygwin utilities and the OpenSSH server. This is free and all open source, but if you want an easy way of setting it up, you have to pay for the Advanced Administrator Console. You don't really need the Advanced Administrator Console, since you can use Cygwin to set everything up, and it comes with a basic console to help.
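Tying this back to the original problem: once an SSH server is running on the Windows machine, the whole workflow could be driven from Linux along these lines (host name, user and paths are purely illustrative):

# copy the batch file over, then launch it remotely via cmd
scp start-selenium.bat user@windowsbox:/cygdrive/c/selenium/
ssh user@windowsbox 'cmd /c c:\\selenium\\start-selenium.bat'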
I prefer to use Cygwin and then use SSH to log in to the Windows machine and execute commands. Be aware that, by default, Cygwin doesn't have OpenSSH installed.
Once you have SSH working on the Windows machine, you can run a command on it from the Linux machine like this:
ssh user@windowsmachine 'mycommand.exe'
You can also set up ssh authentication keys so that you don't need to enter a password each time.
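A quick sketch of that key setup (assuming the Cygwin sshd on the Windows side accepts public-key authentication):

# generate a key pair on the Linux machine, then push the public key to the Windows box
ssh-keygen -t rsa
ssh-copy-id user@windowsmachine
# from now on this should not prompt for a password
ssh user@windowsmachine 'mycommand.exe'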
I've succeeded in running a remote command on W2K3 via expect on Debian Buster. Here is my script:
#!/usr/bin/expect
#
# execute the script in the following manner:
#
# <script> <vindoze> <user> <password> <command>
#
#
set timeout 200
set hostname [lindex $argv 0]
set username [lindex $argv 1]
set password [lindex $argv 2]
set command [lindex $argv 3]
spawn telnet $hostname
expect "login:"
send "$username\r"
expect "password:"
send "$password\r"
expect "C:*"
send "dir c:\\tasks\\logs \r"
# send $command
expect "C:*"
send "exit\r\r\r"
Bear in mind that you need to enable the Telnet service on the Windows machine, and the user you authenticate as must be a member of the built-in TelnetClients Windows group. Or, as most lazy Windows admins do, just authenticate as an Admin user ;)
I use a similar expect script for automated collection and backup of the configuration of CLI-enabled network devices like Allied Telesyn, Cisco, Planet, etc.
Cheers,
LAZA
Not a very secure way, but if you have a running webserver you can use PHP or ASP to trigger a system command. Just hide that script under www.myserver.com/02124309c9867a7616972f52a55db1b4.php or something, and make sure the commands are hard-coded in the script, not passed in via a parameter ...
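The trigger from the Linux side would then just be an HTTP request (reusing the obscured script name from above):

curl "http://www.myserver.com/02124309c9867a7616972f52a55db1b4.php"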