Run external program from R expecting user input - linux

I have an external program called GPOPSIM_for_linux that I would like to run from R. The program expects user input in form of the name of a parameter file. Suppose that MyParam.txt is its name.
Issuing printf 'MyParam.txt' | /home/domi89/GPOPSIM/GPOPSIM_for_linux in the shell works fine, but when I try
> cmd <- "printf 'MyParam.txt' | /home/domi89/GPOPSIM/GPOPSIM_for_linux"
> system2(command = shQuote(cmd))
sh: 1: "printf 'MyParam.txt' | /home/domi89/GPOPSIM/GPOPSIM_for_linux": not found
it doesn't work.

I suspect the issue is with system2, which requires the command and its arguments to be passed separately. While with the original system function you can use
system('ls -al')
with system2 the syntax is
system2('ls', args = '-al')

It turned out I had messed up the working directory. Also, as hinted by Pafnucy, I needed to use system() instead of system2(): shQuote(cmd) had turned the whole pipeline into a single quoted word, so the shell looked for a command with that literal name, which is exactly what the error message shows.
It now works:
system("cd ./data; printf 'MyParam.txt' | /home/domi89/GPOPSIM/GPOPSIM_for_linux")
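The failure mode is reproducible in any shell. A minimal sketch, using printf and cat as stand-ins for the parameter name and the GPOPSIM binary:

```shell
# Quoting the whole pipeline as one word (what shQuote(cmd) did) makes
# the shell search for a command whose name is the entire string; it
# prints a "not found" diagnostic, which grep -c counts.
sh -c '"printf MyParam.txt | cat"' 2>&1 | grep -c 'not found'
# Unquoted, the shell parses the pipeline normally and cat receives
# the text on its stdin.
sh -c 'printf MyParam.txt | cat'
```

The first command prints 1 (one "not found" line), mirroring the sh error above; the second prints MyParam.txt.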

Related

How to execute svn command along with grep on windows?

I am trying to execute an svn command on a Windows machine and capture its output.
Code:
import subprocess
cmd = "svn log -l1 https://repo/path/trunk | grep ^r | awk '{print $3}'"
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
Getting the error:
'grep' is not recognized as an internal or external command,
operable program or batch file.
I do understand that 'grep' is not a Windows utility.
Is this limited to running on Linux only?
Can the same be executed on Windows?
Is my code right?
For Windows, your command will look something like the following:
svn log -l1 https://repo/path/trunk | find "string_to_find"
You need to use the find utility in Windows to get the same effect as grep.
svn --version | find "ra"
* ra_svn : Module for accessing a repository using the svn network protocol.
* ra_local : Module for accessing a repository on local disk.
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
Use svn log --search FOO instead of grep-ing the command's output.
grep and awk are certainly available for Windows as well, but there is really no need to install them -- the code is easy to replace with native Python.
import subprocess

p = subprocess.run(["svn", "log", "-l1", "https://repo/path/trunk"],
                   capture_output=True, text=True)
for line in p.stdout.splitlines():
    # grep ^r
    if line.startswith('r'):
        # awk '{ print $3 }'
        print(line.split()[2])
Because we don't need a pipeline, and just run a single static command, we can avoid shell=True.
Because we don't want to do the necessary plumbing (which you forgot anyway) for Popen(), we prefer subprocess.run(). With capture_output=True we conveniently get its output in the resulting object's stdout attribute; because we expect text output, we pass text=True (in older Python versions you might need to switch to the old, slightly misleading synonym universal_newlines=True).
I guess the intent is to search for the committer in each revision's output, but this will incorrectly grab the third token on any line which starts with an r (so if you have a commit message like "refactored to use Python native code" the code will extract use from it). A better approach altogether is to request machine-readable output from svn and parse that (but it's unfortunately rather clunky XML, so that's another not entirely trivial rabbit hole for you). Perhaps, as a middle ground, implement a more specific pattern for finding those lines -- maybe look for a specific number of fields, and static strings where you know to expect them.
if line.startswith('r'):
    fields = line.split()
    if len(fields) == 14 and fields[1] == '|' and fields[3] == '|':
        print(fields[2])
You could also craft a regular expression to look for a date stamp in the third |-separated field, and the number of changed lines in the fourth.
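For instance, with sed (the pattern below is an illustrative assumption, keyed to the sample log entry shown further down):

```shell
line='r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines'
# Only accept lines shaped like a revision header: rNNN, a |-separated
# committer field, then a date stamp; print the committer field.
printf '%s\n' "$line" |
  sed -nE 's/^r[0-9]+ \| ([^|]+) \| [0-9]{4}-[0-9]{2}-[0-9]{2} .*/\1/p'
```

A commit-message line starting with r would not match the date-stamp part, so it is silently skipped.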
For the record, a complete log entry from svn log looks like
------------------------------------------------------------------------
r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines
refactored to use native Python instead of grep + awk
(which is a useless use of grep anyway; see http://www.iki.fi/era/unix/award.html#grep)
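With that entry as canned input, the false positive described above is easy to demonstrate:

```shell
log='------------------------------------------------------------------------
r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines

refactored to use native Python instead of grep + awk'
# grep ^r matches the revision header *and* the message line, so awk
# prints a spurious "use" after the real committer name.
printf '%s\n' "$log" | grep '^r' | awk '{print $3}'
```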

Execute the output of previous command line

I need to execute the result of a previous command, but I don't know how to proceed.
I have a first command that returns an instruction to log in to the server and then I want to execute it just after.
my-first-command returns: docker login ...
For example:
> my-first-command | execute the result of my-first-command
This should do it I believe.
my-first-command | bash
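A self-contained sketch, with a shell function standing in for my-first-command (the function and the command it emits are made up for illustration):

```shell
# Stand-in: prints the command that should be run next, the way
# my-first-command prints a docker login invocation.
my_first_command() { printf 'echo logged in\n'; }
# Piping its output into a fresh shell executes that command.
my_first_command | bash
```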
I use $(!!) for this. As Charles points out, this may not be what everyone wants to do, but it works for me and suits my purpose better than the other answer.
$ find ./ -type f -name "some.sh"
$ $(!!)
!! is a history expansion that recalls the last command, and wrapping it in $( ) executes it.
This is also useful for taking other actions on the output, since $( ) substitutes the command's output into the current command line.
The handiest way is to use backticks (`your_command`) to execute your sub-command inline and immediately use its output in your main command.
Example:
`find ~/Library/Android/sdk/build-tools/* -d 0 | tail -1`/zipalign -f 4 ./app-release-unsigned.apk ./app-release.apk
In this example I first find the correct directory from which to run zipalign. There can be several matching directories, as in my case (find returns two), so I take the last one using tail. Then I execute zipalign directly, using the previous result as the path to the correct zipalign binary.
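The same pattern with $( ), which is generally preferred over backticks because it nests; a throwaway directory tree stands in for the SDK path:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/29.0.3" "$dir/30.0.2"
# Pick the last matching directory, as the find | tail above does,
# then splice it into a longer command line as a path prefix.
latest=$(ls -d "$dir"/* | tail -1)
echo "$latest/zipalign"
```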

Unix: What does cat by itself do?

I saw the line data=$(cat) in a bash script (I assumed it was just declaring an empty variable) and am mystified as to what it could possibly do.
I read the man pages, but they don't have an example or explanation of this. Does this capture stdin or something? Is there any documentation on this?
EDIT: Specifically how the heck does doing data=$(cat) allow for it to run this hook script?
#!/bin/bash
# Runs all executable pre-commit-* hooks and exits after,
# if any of them was not successful.
#
# Based on
# http://osdir.com/ml/git/2009-01/msg00308.html
data=$(cat)
exitcodes=()
hookname=`basename $0`
# Run each hook, passing through STDIN and storing the exit code.
# We don't want to bail at the first failure, as the user might
# then bypass the hooks without knowing about additional issues.
for hook in $GIT_DIR/hooks/$hookname-*; do
    test -x "$hook" || continue
    echo "$data" | "$hook"
    exitcodes+=($?)
done
https://github.com/henrik/dotfiles/blob/master/git_template/hooks/pre-commit
cat will catenate its input to its output.
In the context of the variable capture you posted, the effect is to assign the statement's (or containing script's) standard input to the variable.
The command substitution $(command) will return the command's output; the assignment will assign the substituted string to the variable; and in the absence of a file name argument, cat will read and print standard input.
The Git hook script you found this in captures the commit data from standard input so that it can be repeatedly piped to each hook script separately. You only get one copy of standard input, so if you need it multiple times, you need to capture it somehow. (I would use a temporary file, and quote all file name variables properly; but keeping the data in a variable is certainly okay, especially if you only expect fairly small amounts of input.)
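The capture-and-replay pattern the hook relies on can be sketched as:

```shell
# Read the input once into a variable (standing in for data=$(cat)),
# then replay it to several consumers; each one sees the full input.
data=$(printf 'line one\nline two\n')
echo "$data" | wc -l      # both lines reach the first consumer
echo "$data" | head -n 1  # and the second
```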
Doing:
t#t:~# temp=$(cat)
hello how
are you?
t#t:~# echo $temp
hello how are you?
(A single Ctrl-D on a line by itself after "are you?" terminates the input. The newline shows up as a space in the output because $temp is unquoted; echo "$temp" would preserve the line break.)
As the manual says:
cat - concatenate files and print on the standard output
Also:
cat: Copy standard input to standard output.
Here, cat reads your entire STDIN into a single string, which is then assigned to the variable temp.
Say your bash script script.sh is:
#!/bin/bash
data=$(cat)
Then, the following commands will store the string STR in the variable data:
echo STR | bash script.sh
bash script.sh < <(echo STR)
bash script.sh <<< STR
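A runnable sketch of those invocations, with the script written to a temporary file (a here-document stands in for the process-substitution variant, and the here-string form is wrapped in bash -c since <<< is bash syntax):

```shell
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
data=$(cat)
echo "data=$data"
EOF
echo STR | bash "$script"        # pipe: prints data=STR
bash "$script" <<'IN'
STR
IN
bash -c "bash $script <<< STR"   # here-string form
```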

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit status of each individual command in a pipe.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put the code in a bash script it shows only one exit status. My default shell on the command line is bash.
Can somebody please help me understand why the behaviour changes, and what I should do to get this working in a script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e "find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}" > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string for a command (see Bash FAQ 050 for more on why doing this is a bad idea).
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
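The difference is easy to observe (run through bash -c, since PIPESTATUS is a bash feature):

```shell
# A pipeline the shell runs directly records every stage's exit status.
bash -c 'true | false | true; echo "${PIPESTATUS[@]}"'   # 0 1 0
# Through eval, the current shell only ran one simple command, so
# PIPESTATUS ends up with a single element.
bash -c 'eval "true | false | true"; echo "${PIPESTATUS[@]}"'
```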
Instead of ${PIPESTATUS[0]}, use ${PIPESTATUS[@]}
As with any array in bash, PIPESTATUS[0] contains the first command's exit status. If you want all of them you have to use ${PIPESTATUS[@]}, which expands to the entire contents of the array.
I'm not sure why it worked for you when you tried it in the command line. I tested it and I didn't get the same result as you.

How to check if command is available or existant?

I am developing a console application in C on linux.
Now an optional part of it (it's not a requirement) is dependent on a command/binary being available.
If I check with system(), I get sh: command not found as unwanted output, and the check wrongly reports the command as existent. So how would I check whether the command is there?
Not a duplicate of Check if a program exists from a Bash script since I'm working with C, not BASH.
To answer your question about how to discover whether the command exists from your code: you can check the return value.
int ret = system("ls --version > /dev/null 2>&1");  // The redirect to /dev/null ensures that your program does not produce the command's output.
if (ret == 0) {
    // The executable was found.
}
You could also use popen, to read the output. Combining that with the whereis and type commands suggested in other answers -
char result[255];
FILE* fp = popen("whereis command", "r");
fgets(result, sizeof(result), fp);
// parse result to see the path of the bin if it has been found
pclose(fp);
Or using type:
FILE* fp = popen("type command" , "r");
The result of the type command is a bit harder to parse, since its output varies depending on what you are looking for (binary, alias, function, not found).
You can use stat(2) on Linux(or any POSIX OS) to check for a file's existence.
Use which: you can either check the value returned by system() (0 if found) or the output of the command (no output means not found):
$ which which
/usr/bin/which
$ echo $?
0
$ which does_t_exist
$ echo $?
1
If you run a shell, the output from "type commandname" will tell you whether commandname is available, and if so, how it is provided (alias, function, path to binary). You can read the documentation for type here: http://ss64.com/bash/type.html
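A quick sketch of what type reports (via bash -c; the exact path printed for ls varies by system):

```shell
bash -c 'type cd'    # a shell builtin
bash -c 'type ls'    # an executable found on PATH
# A missing command makes type fail with a nonzero exit status.
bash -c 'type no_such_cmd_xyz' 2>/dev/null || echo 'not available'
```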
I would just go through the current PATH and see whether you can find it there. That’s what I did recently with an optional part of a program that needed agrep installed. Alternately, if you don’t trust the PATH but have your own list of paths to check instead, use that.
I doubt you need to ask the shell whether it is a builtin.
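The PATH walk can be sketched in shell; a C implementation would loop over getenv("PATH") the same way (the helper name is made up for illustration):

```shell
# Search each $PATH directory for an executable, like which(1) does.
# Simplified: it ignores quirks such as empty PATH components.
find_in_path() (
  IFS=:
  for dir in $PATH; do
    [ -x "$dir/$1" ] && { echo "$dir/$1"; exit 0; }
  done
  exit 1
)
find_in_path sh   # prints e.g. /bin/sh
```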