Establishing an SSH connection from within RStudio on Linux

I am trying to pull a file from another computer into the R environment in RStudio on CentOS 6.
I tried it in plain R first, and when I issue
readLines(pipe( 'ssh root@X.X.X.X "cat /path/somefile.sh"' ))
it correctly asks me for the passphrase of my SSH key and reads the contents.
However, if the same command is executed from RStudio, all I get is:
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied, please try again.
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied, please try again.
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied (publickey,gssapi-with-mic,password).
I suspect the reason is that RStudio on CentOS actually runs under the rstudio-server user (the GUI is provided in a browser). Does anyone know how to properly access SSH'd resources from it?
UPDATE: after executing
Sys.setenv(PATH = paste0(Sys.getenv('PATH'), ':/usr/lib/rstudio-server/bin/postback'))
as suggested below, it no longer outputs the askpass errors, but it still does not work: the console now seems to wait for the command to complete indefinitely.
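One way to confirm that ssh is blocking on a hidden password prompt (an assumption about what the hang is) is to run it with BatchMode, which makes ssh fail fast instead of prompting:
readLines(pipe( 'ssh -o BatchMode=yes root@X.X.X.X "cat /path/somefile.sh"' ))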

rpostback-askpass is part of RStudio. It may help to add its location (/usr/lib/rstudio-server/bin/postback on my system) to PATH so that ssh can find it:
Sys.setenv(PATH = paste0(Sys.getenv('PATH'), ':/usr/lib/rstudio-server/bin/postback'))
UPDATE: RCurl has an scp function for copying files over an SSH connection; see this answer for details. If you are running your scripts with RStudio, you can use its API to enter the SSH password interactively with hidden input:
pass <- .rs.askForPassword("password?")
and the rstudioapi package can help determine whether the script was launched by RStudio or not.
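Putting the answer together, here is a minimal sketch (the sshpass call and the host and path are assumptions carried over from the question, not something the original answer shows) that prompts for the password with hidden input and hands it to ssh:

# Ask with hidden input inside RStudio; fall back to readline elsewhere.
pass <- if (rstudioapi::isAvailable()) {
  rstudioapi::askForPassword("ssh password?")
} else {
  readline("ssh password? ")
}
# sshpass (assumed to be installed on the server) supplies the password
# non-interactively, so ssh never needs an askpass helper.
cmd <- sprintf("sshpass -p '%s' ssh root@X.X.X.X 'cat /path/somefile.sh'", pass)
contents <- readLines(pipe(cmd))

Key-based authentication with ssh-agent would avoid the password handling altogether and is the cleaner long-term fix.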

Related

PsExec - The file cannot be accessed by the system

I'm trying to execute a .bat file on a server in the local network with PsExec.
I'm currently trying this command:
.\PsExec.exe -i -u Administrator \\192.168.4.36 -s -d cmd.exe -c "Z:\NX_SystemSetup\test.bat"
The server has no password (it has no internet connection and is running a clean install of Windows Server 2016), so I'm currently not entering one, and when a password is asked for I simply press Enter, which seems to work. Also, the .bat file currently only opens Notepad when executed.
When I enter this command, I get the message "The file cannot be accessed by the system".
I've tried executing it from PowerShell with administrator privileges (and also without, since I saw another user on Stack Overflow mention that it only worked for them that way), but with no success.
I'm guessing this is a privilege problem, since it "can't be accessed", which would indicate to me that the file was indeed found.
I ran net share in cmd and it says that C:\ on my server is shared.
The file I'm trying to copy is also not in any kind of restricted folder.
Any ideas what else I could try?
EDIT:
I have done a lot more troubleshooting.
On the server, I went into the firewall settings and explicitly opened TCP ports 135 and 445, since according to Google, PsExec uses these.
Also on the server, I opened the Properties of the Windows folder on C: and added an admin$ share, giving Everyone full rights to the folder (stupid, I know, but I'm desperate for this to work).
I also played around a bunch more with different commands. Not even .\PsExec.exe \\192.168.4.36 ipconfig works; I still get the same error: "The file cannot be accessed by the system".
This is honestly maddening. There is no known documentation of this error on the internet. Searching explicitly for "File cannot be accessed" still only brings up results for "File cannot be found" and similar.
I'm surely just missing something obvious. Right?
EDIT 2
I also tried adding the domain name in front of the username. I checked the domain by running set user in cmd on the server.
.\PsExec.exe \\192.168.4.16 -u DomainName\Administrator -p ~ -c "C:\Users\UserName\Documents\Mellanox Update.bat"
The -p ~ argument seems to work for the password, so I added that.
I also tried creating a shortcut of the .bat file and executing it as Administrator, using it instead of the original .bat file. The error stays the same: "The file cannot be accessed by the system".
As additional info: the PC I'm sending the command from runs Windows 10, and the server runs Windows Server 2016.
So, the reason for this specific error is as simple and as stupid as it gets.
Turns out I was using the wrong IP. The IP I was using is an IPMI address, which does not allow any traffic other than IPMI-related traffic.
I haven't gotten everything working yet, since I've run into some different errors, but the original question/problem has been resolved.
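For anyone who hits the same wall: a quick reachability test from the client would have caught this early, since PsExec needs SMB (TCP 445) on the address you target. A sketch using the IP from the question:

# Succeeds on the server's real LAN address; fails on the IPMI address,
# which only answers IPMI traffic.
Test-NetConnection -ComputerName 192.168.4.36 -Port 445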

Cannot Connect to Linux Oracle Database with Perl Script after connecting with PuTTY

I have the following problem:
I currently connect to one of our Linux servers using PuTTY on my Windows 10 machine. If I use a ‘standard’ PuTTY connection I have no problem: I can log in and run my Perl script to access an Oracle database on the Linux server. However, recently I have set up a new PuTTY connection (I copied the original working copy used above). The only difference from the original is that I have entered the following in the section Connection->SSH->Remote command of the PuTTY configuration window:
cd ../home/code/project1/scripts/perl ; /bin/bash
(I have done this so I arrive directly in the folder containing all my scripts.)
I can still log into the server with no problems and it takes me straight to the folder that contains my Perl scripts. However, when I run the script to access the Oracle database I get the following error:
DBI connect('server1/dbname','username',...) failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at PerlDBFile1.pl line 10.
impossible de se connecter à server1 / dbname [French: unable to connect to server1/dbname] at PerlDBFile1.pl line 10, <DATA> line 1.
In addition, if I run the env command on the server, the variable $ORACLE_HOME is not listed. (If I run the same env command on the server over the standard PuTTY connection, the $ORACLE_HOME variable is present.)
Just to note: Running any other Perl script on the server (that does NOT access the Oracle database) through either of the PuTTY sessions I have created works with no problems.
Any help much appreciated.
When you set a remote command in PuTTY, the shell is not started as a login shell, so the .bash_profile in your default $HOME directory is never run. This is why you are getting the error.
To resolve it, either place a copy of .bash_profile in your perl directory, or add a command that sources .bash_profile to the remote command.
OK, I have the solution!...Thanks to everyone who replied.
Basically, I originally had the command:
cd ../home/code/project1/scripts/perl ; /bin/bash (See original post)
To get it to work, I replaced the above with:
cd ../home/code/project1/scripts/perl; source ~/.bash_profile; /bin/bash
I also tried:
cd ../home/code/project1/scripts/perl; /bin/bash; source ~/.bash_profile
But that did NOT work (the source only runs after /bin/bash exits, so the interactive shell never sees those settings).
Hope this helps someone.
Gauss76
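An alternative worth noting (a sketch, not something from the original post): starting bash as a login shell makes it read ~/.bash_profile by itself, so the explicit source is unnecessary:
cd ../home/code/project1/scripts/perl ; exec /bin/bash -l
The exec replaces the wrapper shell, so exiting bash ends the session cleanly.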

How to execute a system command on a remote Linux server using CGI and Perl's Net::OpenSSH module?

EDIT: Read this first: what I was trying to do here is the result of extreme tunnel vision. The post might be amusing, but not informative. You don't need to SSH into your own server to execute a command; what was I even thinking...
The title pretty much says it all. I want to host a CGI website on a Linux server (Debian, if it matters) and, when a button is clicked, perform a system command on the server itself. I'm doing this through Perl and its Net::OpenSSH module.
Here is the problem: I can run the script from the terminal on the server itself, but only if I use sudo. It doesn't matter if the command is simply "ls". Unsurprisingly, clicking the button on the website that calls the module doesn't work either.
Here is my code:
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
print("Content-type: text/html\n\n");
print("TEST");
my $ssh = Net::OpenSSH->new('localhost', user => 'myusername', password => 'mypassword');
$ssh->system("ls") or die "ERROR: " . $ssh->error;
print("TEST2");
When running it in the terminal using sudo, the script prints out TEST, then lists the folders in my home directory (Desktop, Documents, etc) and finally, TEST2.
When I'm not using sudo, it prints only TEST and after that this error message:
ERROR: unable to establish master SSH connection: the authenticity of
the target host can't be established; the remote host public key is
probably not present on the '~/.ssh/known_hosts' file at
opensshtest.pl line 13.
I'm not using SSH keys at all, I'm trying to supply the username and password by hardcoding them into the script.
Also, when opened in a browser, it only prints out the first TEST.
Any help would be appreciated.
It's me again, the guy who posted the question. It's funny how I've spent hours trying to make this work, and stumbled upon a solution maybe an hour after posting the question here. But here it is:
I've added master_opts => [-o => "StrictHostKeyChecking=no"] as an additional argument to the creation of the Net::OpenSSH object (the line with user, password, etc.).
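For completeness, the working constructor would look roughly like this (the host and credentials are the placeholders from the question; note that the OpenSSH option is spelled StrictHostKeyChecking):

my $ssh = Net::OpenSSH->new('localhost',
    user        => 'myusername',
    password    => 'mypassword',
    master_opts => [-o => 'StrictHostKeyChecking=no'],
);

Disabling strict host-key checking is fine for a quick test, but it removes protection against man-in-the-middle attacks; adding the host key to known_hosts is the safer fix.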

Linux: couldn't copy files to the server, access denied

I want to copy a WAR file to my Tomcat server. The server is Linux.
First of all, I totally can do this:
ssh dev@myserver
Then I enter my password, and it works.
Then I can do this:
cd /bla/bla/tomcat/webapps
Now I want to copy the WAR, so I do this:
scp myFile.war dev@myserver:/bla/bla/tomcat/webapps/myFile.war
I enter the password, but then I get this error message:
scp: /bla/bla/tomcat/webapps/myFile.war: Permission denied
What am I doing wrong, please?
Side note: my operating system is macOS.
Update: how can I use sudo to do the copy?
This works for me:
scp myFile.war root@myserver:/bla/bla/tomcat/webapps/myFile.war
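If logging in as root is not an option, a common pattern (a sketch; the paths are taken from the question) is to upload to a directory the dev user can write to and then move the file with sudo, using -t so sudo can prompt for a password:

scp myFile.war dev@myserver:/tmp/myFile.war
ssh -t dev@myserver 'sudo mv /tmp/myFile.war /bla/bla/tomcat/webapps/myFile.war'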

scp script runs perfectly under user but fails mysteriously under cron job

I have a Python script that performs an scp operation to transfer files from a Synology DiskStation (running Linux) to several Mac OS X computers on the local network. I have set up RSA private/public key pairs on all the machines involved. If I invoke this Python script from the NAS drive as the admin user, everything works exactly as expected. My NAS drive's crontab file specifies the same admin user to run the script in the exact same manner. However, scp fails with an exit status code of 1.
What could cause this behavior?
[update]
Using scp -v (or scp -vv) reports more information. I can also see that it is supplying the correct key and that authentication is working as expected. I also notice that it has worked on a few of the OS X machines, but not all of them.
I verified that .ssh/known_hosts and .ssh/authorized_keys were all in place for admin, but not for root. For some reason, when the Python script was run under the crontab, the known_hosts file that was checked was root's rather than the admin user's. So the host-key challenge came up during the scp command and, because there was no tty, scp just checked root's known_hosts file, didn't find the remote host, and failed. After adding all the OS X remote hosts to that known_hosts file, it worked smoothly.
I think an alternate (and more secure) solution could have been to install a separate crontab under the admin user, but what I did was just faster.
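A non-interactive way to pre-populate a known_hosts file (a sketch; the hostnames are placeholders) is ssh-keyscan, run as whichever user cron actually executes the job as:

# cron resolved ~/.ssh relative to root here, so append to root's file
ssh-keyscan mac1.local mac2.local >> /root/.ssh/known_hosts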
