R Command not recognized when submitted with SSH - linux

I am submitting a shell script on a remote host that in turn submits an R script, but I get the error R: command not found or Rscript: command not found (depending on whether I tried R CMD BATCH or Rscript).
I have tried submitting in the following ways:
ssh <remote-host> exec $HOME/test_script.sh
ssh <remote-host> `sh $HOME/test_script.sh`
The script test_script.sh contains (I have tried R CMD BATCH as well):
#!/bin/sh
Rscript --no-save --no-restore $HOME/greetme.R
exit 0
The script greetme.R contains only cat("Hello\n").
The reason I am getting flustered is that when I log into the remote-host and submit the original script with sh $HOME/test_script.sh, it runs as intended.
The system specs and R versions for both the local and remote hosts are identical:
> R.version
               _
platform       x86_64-unknown-linux-gnu
arch           x86_64
os             linux-gnu
system         x86_64, linux-gnu
status
major          3
minor          1.0
year           2014
month          04
day            10
svn rev        65387
language       R
version.string R version 3.1.0 (2014-04-10)
nickname       Spring Dance
Why is Linux refusing to recognize the commands?
I would prefer solutions using R CMD BATCH or Rscript but if there are known workarounds using littler or %R_TERM% I would like to hear them too.
I used this related question as reference, as well as the documents referenced in the comments: R.exe, Rcmd.exe, Rscript.exe and Rterm.exe: what's the difference?
EDIT for solution:
As @merlin2011 suggested, once I specified the full path in test_script.sh, everything worked as intended:
#!/bin/sh
/opt/R/bin/Rscript --no-save --no-restore $HOME/greetme.R
exit 0
I found the path using the suggested which command:
$ which Rscript
/opt/R/bin/Rscript

It appears that you have a PATH issue: R is not on your PATH when you run the command through ssh. A command passed to ssh runs in a non-interactive shell, which does not read the login profile where PATH is usually extended, so directories added there are missing.
If you specify the full path to R and Rscript on the remote host, it should resolve the problem.
If you are not sure what the full path is, try logging into the server and running which R (or which Rscript) to get the path.
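A minimal sketch of both workarounds, using the /opt/R/bin path from the edit above (substitute whatever which Rscript reports on your host):

# Option 1: call the interpreter by its absolute path
ssh <remote-host> '/opt/R/bin/Rscript --no-save --no-restore $HOME/greetme.R'

# Option 2: prepend the directory to PATH for this invocation only
ssh <remote-host> 'PATH=/opt/R/bin:$PATH sh $HOME/test_script.sh'

The single quotes matter: they stop your local shell from expanding $HOME and $PATH, so both are evaluated on the remote host instead.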

Related

error running run file in centOS - bad display

I have a task to install Oracle 11g on CentOS 8 using a VM (I'm new to Linux / Oracle).
I downloaded the Oracle files and unzipped them, then I tried to run ./runInstaller, but I get an error. This is the full terminal session with the error:
login as: admin
admin@192.168.163.129's password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Thu May 21 09:26:48 2020 from 192.168.163.1
[admin@oracledb ~]$ cd Downloads
[admin@oracledb Downloads]$ cd database
[admin@oracledb database]$ ls
doc linux.x64_11gR2_database_1of2 response runInstaller stage
install linux.x64_11gR2_database_2of2 rpm sshsetup welcome.html
[admin@oracledb database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 2027 MB Passed
Checking swap space: must be greater than 150 MB. Actual 1759 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Continue? (y/n) [n] y
>>> Ignoring required pre-requisite failures. Continuing...
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-05-21_09-43-58AM. Please wait ...
DISPLAY not set. Please set the DISPLAY and try again.
Depending on the Unix Shell, you can use one of the following commands as examples to set the DISPLAY environment variable:
- For csh: % setenv DISPLAY 192.168.1.128:0.0
- For sh, ksh and bash: $ DISPLAY=192.168.1.128:0.0; export DISPLAY
Use the following command to see what shell is being used:
echo $SHELL
Use the following command to view the current DISPLAY environment variable setting:
echo $DISPLAY
- Make sure that client users are authorized to connect to the X Server.
To enable client users to access the X Server, open an xterm, dtterm or xconsole as the user that started the session and type the following command:
% xhost +
To test that the DISPLAY environment variable is set correctly, run a X11 based program that comes with the native operating system such as 'xclock':
% <full path to xclock.. see below>
If you are not able to run xclock successfully, please refer to your PC-X Server or OS vendor for further assistance.
Typical path for xclock: /usr/X11R6/bin/xclock
[admin@oracledb database]$
I am using PuTTY and Xming but I still get this error.
Make sure your PuTTY session is connecting with "X11 forwarding" enabled (in the PuTTY configuration under Connection > SSH > X11).
Be sure, after you set this, to scroll back up to 'Session' and click 'Save' so the setting persists.
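Once X11 forwarding is enabled, a quick sanity check from the same SSH session (this assumes the X utilities the installer mentions are installed on the server):

echo $DISPLAY                   # should print something like localhost:10.0 while forwarding is active
/usr/bin/xdpyinfo | head -n 5   # should print display info instead of 'unable to open display'

If echo $DISPLAY prints nothing, Xming was probably not running when the session was opened; start Xming first, then reconnect.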

WMIC differences in Linux vs Windows

This wmic query (NODE, USER, PASS all desensitised)...
wmic /NODE:10.00.00.1 /LOCALE:MS_409 /PRIVILEGES:ENABLE /TRACE:OFF /INTERACTIVE:OFF /FAILFAST:OFF /USER:domain\my_user /PASSWORD:myPass! /OUTPUT:STDOUT /APPEND:STDOUT /AGGREGATE:ON class StdRegProv CALL EnumKey ^&H80000002,"Software\Microsoft\SystemCertificates\MY\Certificates"
(^&H80000002 is the uint32 value of HKEY_LOCAL_MACHINE; the leading ^ escapes the & for the Windows CMD shell.)
... runs flawlessly in a CMD prompt on Windows. I can also run it in the context of a Node package from my local Windows machine with success; I assume this is because the wmic call is made specifically to the local (Windows) machine, where it is handled effortlessly, returning a result containing what I require...
res.sNames [ 'BB731A3DD8F089A6D4E59AF9D706...' ]
I created a Docker container running Alpine and Node, where I host an Express application. I followed the instructions below to install WMIC on Linux...
https://askubuntu.com/questions/885407/installing-wmic-on-ubuntu-16-04-lts
This installed successfully.
Now when I run the exact same query from a bash prompt, either via my Node app or as a direct command, I'm receiving this result:
Garne@MYCOMPUTERNAME MINGW64 ~
$ wmic.exe /NODE:10.00.00.1 /LOCALE:MS_409 /PRIVILEGES:ENABLE /TRACE:OFF /INTERACTIVE:OFF /FAILFAST:OFF /USER:domain\my_user /PASSWORD:myPass! /OUTPUT:STDOUT /APPEND:STDOUT /AGGREGATE:ON class StdRegProv CALL EnumKey ^&H80000002,"Software\Microsoft\SystemCertificates\MY\Certificates"
[1] 426
bash: H80000002,Software\Microsoft\SystemCertificates\MY\Certificates: No such file or directory
Garne@MYCOMPUTERNAME MINGW64 ~
$ ERROR: Description = Access is denied.
I can't for the life of me work out whether this is due to a string formatting error in Linux vs Windows or whether Linux is running a different variant of wmic that isn't resolving my query correctly?
For anyone wondering, after hours of testing this with very obscure error messages: make sure you escape absolutely everything bash-style, not in the Windows fashion.
Note:
\& instead of ^&
Wrap the USER value in ''
Wrap the PASSWORD value in ''
References here:
https://manpages.debian.org/buster/bash/bash.1.en.html#QUOTING
$ wmic /NODE:10.23.0.11 /LOCALE:MS_409 /PRIVILEGES:ENABLE /TRACE:OFF /INTERACTIVE:OFF /FAILFAST:OFF /USER:'domain\my_user' /PASSWORD:'myPass!' /OUTPUT:STDOUT /APPEND:STDOUT /AGGREGATE:ON class StdRegProv CALL EnumKey \&H80000002,"Software\Microsoft\SystemCertificates\MY\Certificates"
Executing (StdRegProv)->EnumKey()
Method execution successful.
Out Parameters:
instance of __PARAMETERS
{
ReturnValue = 0;
sNames = {"BB731A3DD8F089A6D4E59AF9D70601F9CBB94A9D"};
};
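For the curious, the original failure was plain bash parsing rather than a different WMIC variant. In bash, ^ is an ordinary character and an unquoted & terminates the command and sends it to the background (hence the [1] 426 job line above); bash then tries to execute the leftover text as a new command. A minimal illustration, independent of WMIC:

echo one ^& two   # bash runs 'echo one ^' in the background, then looks for a command named 'two'
echo one \& two   # the backslash makes & literal; this prints: one & two

This is why the working command above passes the registry constant as \&H80000002 and wraps the USER and PASSWORD values in single quotes.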

How to run two shell scripts at startup?

I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following line to my crontab:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh
but it is not running.
I have also tried adding both scripts to the Startup Applications, using this command for roscore: sh /path/to/run_roscore.sh, and the following command for the detection node: sh /path/to/run_detection_node.sh. It still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and then the syslog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
Since I got this working eventually, I am going to answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, when I changed the shell type from sh to bash, the scripts started running as soon as the system boots up.
Note, in case this helps someone: My intention to have run_roscore.bash as a separate script was to run roscore as a background process. One can run it directly from a single script (which is also running the detection node) by having roscore& as a command before the rosnode starts. This command will fire up the master as a background process and leave the same terminal open for following commands to be executed.
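A minimal sketch of the single combined script that note describes (the 5-second delay and the detection script path are illustrative, not taken from the original setup):

#!/bin/bash
# Start the ROS master in the background; & returns control to this shell.
roscore &
# Crude wait to give the master time to initialize before the node connects.
sleep 5
# Start the detection node from the same shell.
bash /path/to/run_detection_node.sh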
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
  file: /var/log/script1.log
wait: 1
require:
  - script2
And for /etc/immortal/script2.yml
cmd: /path/to/script2
log:
  file: /var/log/script2.log
What this will do is try to start both scripts at boot time; the first one, script1, will wait 1 second before starting, and it will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Based on your operating system, you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; you can find some here: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu starts up, then you should start it as a service (not via cron).
See this question/answer.
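For completeness, a minimal sketch of such a systemd unit (Ubuntu 16.04 uses systemd; the unit name roscore.service and the script path are illustrative):

# /etc/systemd/system/roscore.service
[Unit]
Description=Start roscore at boot
After=network.target

[Service]
Type=simple
ExecStart=/path/to/run_roscore.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

After saving the file, sudo systemctl enable roscore.service makes it start on every boot, and sudo systemctl start roscore.service tests it immediately.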

Error on neo4j server start on arch linux

I have an Arch Linux setup and installed neo4j through the Arch User Repository (yaourt -S neo4j). I'm able to run the web console fine (sudo neo4j console, with seemingly normal output and full functionality), but when trying to start the server (sudo neo4j start) I encounter the following error message:
/usr/share/neo4j/bin/utils: line 345: [: -lt: unary operator expected
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=/etc/neo4j/neo4j-server.properties -Djava.util.logging.config.file=/etc/neo4j/logging.properties -Dlog4j.configuration=file:/etc/neo4j/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
Starting Neo4j Server...cat: /run/neo4j/neo4j-service.pid: No such file or directory
process []... waiting for server to be ready. Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
rm: cannot remove ‘/run/neo4j/neo4j-service.pid’: No such file or directory
There's no delay before the error message is printed, so it seems to be something other than the timeout. I'm quite new to neo4j (I worked through a fair bit of the user manual using the web console, but no development or server config experience), so I'm not really sure what else might be relevant. I tried looking through the utils script and the error appears to be where it attempts to su neo4j, but it also seems to proceed to attempt to start the server. I also tried changing the port it's starting on as in this question, but no change. The only log I can find just has this over and over (with appropriate timestamps):
Oct 15, 2014 1:33:49 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Any help at all would be appreciated!
EDIT:
The line 345 that it's failing on is the end of this snippet:
if [ $UID == 0 ] ; then
    OPEN_FILES=`su $NEO4J_USER -c "ulimit -n"`
else
    OPEN_FILES=`ulimit -n`
fi
if [ $OPEN_FILES -lt 40000 ]; then
From doing some echo debugging, it seems that su $NEO4J_USER is failing, probably because $NEO4J_USER is set to neo4j, a user that does not exist on my system. I tried setting that to root in one of the config files, but evidently that's not working properly. Arch is a continual learning experience for me, but I've not had to add a new user before to get software working.
The interesting line here is:
/usr/share/neo4j/bin/utils: line 345: [: -lt: unary operator expected
That message appears when OPEN_FILES ends up empty, so the unquoted test expands to [ -lt 40000 ]; this is consistent with the su call producing no output. I assume that is caused by a wrong default shell for the neo4j user. What default shell is currently set for the neo4j system user? Try switching it to bash; the startup scripts should work nicely with bash.
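A quick way to check and change that shell (getent and chsh are standard Linux tools; this assumes the neo4j system user exists, which the edit above suggests it may not, in which case it needs to be created first):

getent passwd neo4j              # the last field of the output is the login shell
sudo chsh -s /bin/bash neo4j     # switch the neo4j user's login shell to bash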

Cygwin and Apache Pig - a perplexing pseudo-grunt>

I'm trying to get a working installation of Apache Pig on a Windows PC running the Vista operating system, in order to use it as a learning tool; I don't intend to do any serious data processing with Pig on this machine. A single node, single JVM -x local setup is what I want.
I come from a Windows background, so UNIX is the big learning curve for me, but following advice in the online Apache Pig documentation Getting Started, I have installed cygwin and it seems to be working fine. I included the Perl package in my cygwin download and installation, as advised in Getting Started, and that seems to be working fine as well - the /bin directory contains perl.exe and I can access all the Perl documentation.
I then downloaded pig-0.11.1, unpacked it with tar -xzvf pig-0.11.1.tar.gz and spent a few (mostly enjoyable) days using the errors I got when trying pig -x local to study the Bash Reference Manual and go through the pig shell script, which I think I now pretty much understand. Having adjusted calls to the cygwin utility cygpath in this script, so that pig.jar is found and the arguments passed to java.exe remain converted by cygpath to a form that java.exe can understand, I get a grunt prompt. But my whoops of joy have been short-lived.
In fact, I get the same grunt prompt with pig-0.7.0 downloaded, installed and used out-of-the-box, with pig -x local, as RELEASE_NOTES.txt describes, without any tampering with its pig shell script at all. But unfortunately it is the same grunt prompt I get with pig-0.11.1: a curious, pseudo-grunt prompt where the arrow keys can move the cursor all over the prompt, in fact all around the screen, over previous commands given at the dollar prompt even, and the return key (preceded by ;) does nothing but jump the cursor to a new line. Text can be written but not entered, and only ^c and ^\ seem to work - mercifully returning the bash dollar prompt and a little sanity.
From my pig-0.7.0 directory, typing bin/pig -help gives a proper readout:
Apache Pig version 0.7.0 (r941408)
compiled May 05 2010, 11:15:55
USAGE: Pig [options] [-] : Run interactively in grunt shell.
Pig [options] -e[xecute] cmd [cmd ...] : Run cmd(s).
Pig [options] [-f[ile]] file : Run cmds found in file.
options include: ... etc etc
From my pig-0.7.0 directory, typing bin/pig -x local results in the following response:
13/04/18 10:37:51 INFO pig.Main: Logging error messages to: C:\cygwin\home\Richard\pig_installation\pig-0.7.0\pig_1366277871311.log
2013-04-18 10:37:51,540 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
From any directory, since I have set PATH to my pig-0.11.1/bin directory, typing pig -x local results in the following response:
which: no hadoop in (usr/local/bin:/cygdrive/c/Program Files ... etc etc .. )
2013-04-18 10:48:59,946 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1 (r1459641) compiled Mar 22 2013, 02:13:53
2013-04-18 10:48:59,946 [main] INFO org.apache.pig.Main - Logging error messages to: C:\cygwin\home\Richard\pig_installation\pig-0.7.0\pig_1366278539943.log
2013-04-18 10:48:59,965 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file C:\Users\Richard/.pigbootup not found
2013-04-18 10:49:01,404 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
Is this a fatal error or am I just missing a trick? The pig shell script in pig-0.11.1 seems to imply that if hadoop is not found, pig.jar or pig-?.!(*withouthadoop).jar (e.g. pig-0.11.1.jar) will do instead, and the documentation tells me that pig on Windows with cygwin is supported (for -x local but not -x mapreduce). Is this pseudo-grunt> prompt a complete mirage, or does it indicate partial success?
Postscript to the above: I've followed the section Pig Tutorial in Apache's Pig documentation Getting Started, set the environment variables, edited the pig-0.7.0/tutorial/build.xml file as per instructions, run the ant command, created the pigtutorial.tar.gz file, moved it, unzipped it, found pig script 1 and run pig -x local script1-local.pig and IT WORKS! The output file - part-r-00000 - contains no warnings at all, just five columns of records, as expected. A new attempt to get interactive mode, however, with pig -x local, results in the same pseudo-grunt> prompt.
