I am running MATLAB 2011a under Ubuntu, and I have some C++ programs I execute from the command line, such as:
./community sample_networks/karate.bin -l -1 -q 0.01 > sample_networks/karateout.txt
These programs produce a text file which I would like to pick up from MATLAB.
I did not write these programs; I would simply like MATLAB to pass a string to the shell to be executed, so that the resulting text file can be read back into MATLAB. I would like to avoid MEX for the time being.
EDIT (using the system command does not work):
pwd
ans = /home/alex/Documents/MATLAB/MATLABsvnWorkingDir/Bloom/graphAnalysis/analysisAttempt2/functionsDownloaded/BlondelLouvainCPP/Community_BGLL_CPPLinux
system('./community sample_networks/karate.bin -l -1 -q 0.01 > sample_networks/karateout.txt')
./community: /home/alex/matlab2011a/sys/os/glnx86/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./community)
ans = 1
Looks like you just need to use the system function. It will launch another executable and wait until it's finished.
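The GLIBCXX_3.4.11 error shown in the edit is a separate problem: on Linux, MATLAB prepends its own sys/os/glnx86 library directory to LD_LIBRARY_PATH, so the child process loads MATLAB's bundled (older) libstdc++ instead of the system one. A common workaround, offered here as an untested sketch, is to clear the variable just for the child command:
system('LD_LIBRARY_PATH="" ./community sample_networks/karate.bin -l -1 -q 0.01 > sample_networks/karateout.txt')
The result file can then be read back with MATLAB's usual file I/O, e.g. fileread('sample_networks/karateout.txt').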
I have been using J for a few weeks now and am loving it. However, all of my work has been in the ijconsole. Does J provide a way to run .ijs files without using load, similar to how you can simply run python my_file.py?
I know that on Windows there is jconsole.exe, but on Linux and OS X there doesn't seem to be an equivalent option.
You should be able to run bin/jconsole with the .ijs file as the first command line argument.
Here's an example session, copied out of my terminal:
~/j64-807$ cat ex.ijs
d =: 1+1
~/j64-807$ ./bin/jconsole ex.ijs
d
2
I figured out how to get my desired behavior from
https://www.jsoftware.com/help/user/hashbang.htm
The basic idea is to create a Unix script that points to jconsole by putting
#!/home/fred/j64-807/bin/jconsole
at the top of the .ijs file.
Then, you can echo any output you wish or use ARGV to read input.
Finally, run chmod +x your_script.ijs and execute it with ./your_script.ijs.
You can copy the J interpreter files to new projects on servers, etc., and call them from bash.
So a final example would be:
#!/home/fred/j64-807/bin/jconsole
echo +/*:0".>,.2}.ARGV
exit''
which computes the sum of the squares of its command-line arguments.
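For example, assuming the script is saved as your_script.ijs and made executable:
$ ./your_script.ijs 3 4
25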
My Python 3.7.1 script generates a fasta file called
pRNA.sites.fasta
Within the same script, I call the following system command:
cmd = "weblogo -A DNA < pRNA.sites.fasta > OUT.eps"
os.system(cmd)
print(cmd) #for debugging
I am getting the following error message and debugging message on the command line.
Error: Please provide a multiple sequence alignment
weblogo -A DNA < pRNA.sites.fasta > OUT.eps
"OUT.eps" file is generated but it's emtpy. On the other hand, if I run the following 'weblogo' command from the command line, It works just find. I get proper OUT.eps file.
$ weblogo -A DNA<pRNA.sites.fasta>OUT.eps
I am guessing my syntax for the os.system call is wrong. Can you tell me what is wrong with it? Thanks.
Never mind. It turned out that I was not closing my file, pRNA.sites.fasta, before making the system call that uses it.
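For reference, here is a sketch of the fixed flow (the file names and the weblogo arguments are the ones from the question; the fasta records are placeholders). A with block guarantees the file is flushed and closed before weblogo reads it, and subprocess avoids going through a shell:
import subprocess

# Write the alignment; the with block flushes and closes the file
# before anything else reads it. (These records are placeholders.)
with open("pRNA.sites.fasta", "w") as fh:
    fh.write(">site1\nACGTACGT\n>site2\nACGTTCGT\n")

# Equivalent to: weblogo -A DNA < pRNA.sites.fasta > OUT.eps
with open("pRNA.sites.fasta", "rb") as fin, open("OUT.eps", "wb") as fout:
    subprocess.run(["weblogo", "-A", "DNA"], stdin=fin, stdout=fout, check=True)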
I am trying to write an Octave script that I can run as an executable.
I am using octave version 3.6.0. I am running the following script, downloaded from here:
#!/usr/local/bin/octave -qf
# An example Octave script
len = input( "What size array do you wish to use for the evaluation: " );
clear a;
tic();
for i=1:len
  a(i) = i;
endfor
time1 = toc();
a = [1];
tic();
for i=2:len
  a = [a i];
endfor
time2 = toc();
a = zeros( len, 1 );
tic();
for i=1:len
  a(i) = i;
endfor
time3 = toc();
printf( "The time taken for method 1 was %.4f seconds\n", time1 );
printf( "The time taken for method 2 was %.4f seconds\n", time2 );
printf( "The time taken for method 3 was %.4f seconds\n", time3 );
However, when I run the script from the command line, I get the following error:
'usr/local/bin/octave: invalid option -- '
Whereas when I type the same command directly at the command line:
/usr/local/bin/octave -qf
I get the octave command prompt. What am I doing wrong?
I assume you're on some sort of Unix/Linux system. Is the file in "DOS" format, with DOS-style line endings? This could cause problems with how the command is interpreted.
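One quick way to check (test.sh stands for your script file): run head -n 1 test.sh | cat -v and look at the end of the line.
$ head -n 1 test.sh | cat -v
#!/usr/local/bin/octave -qf^M
A trailing ^M means the file has DOS (CRLF) line endings, so the carriage return becomes part of the interpreter arguments. dos2unix test.sh (or sed -i 's/\r$//' test.sh) strips them.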
Your shebang line (which, by the way, has a space it shouldn't) is calling /usr/local/bin/octave, but the error is coming from /usr/bin/octave. Is that a mistake? If so, you need to copy and paste code and errors exactly for things like this. If not, the local version may be a script that calls the binary with an incorrect option when run non-interactively. In particular, it looks like the script (or something, at least) is trying to use a long option (--option), and the binary doesn't support it (so it's interpreting it as a short option).
Firstly, the posted script runs fine on my system.
I run nano test.sh, paste the script into the file, and change the first line to #!/usr/bin/octave -qf.
I press Ctrl-O and Ctrl-X.
I then make the script executable using chmod +x test.sh.
I then run the script using ./test.sh or octave -qf test.sh and it works as expected.
Notes:
Octave on my system is
$ which octave
/usr/bin/octave
$ file /usr/bin/octave
/usr/bin/octave: symbolic link to `octave-3.6.1'
$ file /usr/bin/octave-3.6.1
/usr/bin/octave-3.6.1: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x061b0a703928fc22af5ca93ee78346a7f5a0e481, stripped
And on my system /usr/bin and /usr/local/bin are in $PATH
The only way I can generate the error you mention is by changing the first line of the script to
#!/usr/bin/octave - - or #!/usr/bin/octave -q -f.
This gives
$ ./test.sh
/usr/bin/octave: invalid option -- ' '
This means that in the script on your machine the shebang line is incorrect or is being interpreted incorrectly.
Verify that the first line of the script is correct.
Also check what happens if the line is changed to #!/usr/local/bin/octave -q or to #!/usr/local/bin/octave -f.
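To see exactly what the kernel passes to the interpreter, you can print the kernel-supplied argument vector from /proc. This is only a sketch (the file name t.py is made up, and /proc/self/cmdline is Linux-specific):
$ cat t.py
#!/usr/bin/python3 -B
import sys
print("sys.argv    :", sys.argv)
with open("/proc/self/cmdline", "rb") as f:
    print("kernel argv :", f.read().split(b"\0")[:-1])
$ chmod +x t.py
$ ./t.py hello
sys.argv    : ['./t.py', 'hello']
kernel argv : [b'/usr/bin/python3', b'-B', b'./t.py', b'hello']
On Linux, everything after the interpreter path is passed as a single argument, so with a shebang of #!/usr/bin/python3 -B -B the interpreter would receive the one argument "-B -B" and reject it, which is the same failure mode as the octave -q -f line above.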
For more information on the parsing of the shebang line, see:
The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours.
how to use multiple arguments with a shebang (i.e. #!)?
Bug in #! processing - One More Time
Who determines what executes when a Bash-like script is executed as a binary without a shebang?
I guess that running a normal script with a shebang is handled by the binfmt_script Linux module, which checks for a shebang, parses the command line, and runs the designated script interpreter.
But what happens when someone runs a script without a shebang? I've tested the direct execv approach and found out that there's no kernel magic in there. I.e., given a file like this:
$ cat target-script
echo Hello
echo "bash: $BASH_VERSION"
echo "zsh: $ZSH_VERSION"
Running a compiled C program that does just an execv call yields:
$ cat test-runner.c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char *argv[] = { "./target-script", NULL };
    if (execv("./target-script", argv) == -1)
        perror("./target-script");
    return 1;
}
$ ./test-runner
./target-script: Exec format error
However, if I do the same thing from another shell script, it runs the target script using the same shell interpreter as the original one:
$ cat test-runner.bash
#!/bin/bash
./target-script
$ ./test-runner.bash
Hello
bash: 4.1.0(1)-release
zsh:
If I do the same trick with other shells (for example, Debian's default sh - /bin/dash), it also works:
$ cat test-runner.dash
#!/bin/dash
./target-script
$ ./test-runner.dash
Hello
bash:
zsh:
Mysteriously, it doesn't quite work as expected with zsh, which doesn't follow the general scheme. It looks like zsh executes /bin/sh on such files after all:
greycat@burrow-debian ~/z/test-runner $ cat test-runner.zsh
#!/bin/zsh
echo ZSH_VERSION=$ZSH_VERSION
./target-script
greycat@burrow-debian ~/z/test-runner $ ./test-runner.zsh
ZSH_VERSION=4.3.10
Hello
bash:
zsh:
Note that ZSH_VERSION in the parent script was set, while ZSH_VERSION in the child wasn't!
How does a shell (Bash, dash) determine what gets executed when there's no shebang? I've tried to dig up the relevant place in the Bash and dash sources, but, alas, I got lost in there. Can anyone shed some light on the magic that determines whether a target file without a shebang should be executed as a script or as a binary in Bash/dash? Or maybe there is some interaction with the kernel or libc; in that case I'd welcome an explanation of how it works in the Linux and FreeBSD kernels/libcs.
Since this happens in dash and dash is simpler, I looked there first.
Seems like exec.c is the place to look, and the relevant function is tryexec, which is called from shellexec, which in turn is called whenever the shell thinks a command needs to be executed. A (simplified) version of the tryexec function is as follows:
STATIC void
tryexec(char *cmd, char **argv, char **envp)
{
	char *const path_bshell = _PATH_BSHELL;

repeat:
	execve(cmd, argv, envp);
	if (cmd != path_bshell && errno == ENOEXEC) {
		/* No binary magic and no shebang: retry as "/bin/sh cmd ...",
		 * shifting argv down one slot to make room for the shell path. */
		*argv-- = cmd;
		*argv = cmd = path_bshell;
		goto repeat;
	}
}
So it simply replaces the command to be executed with the path to the shell itself (_PATH_BSHELL defaults to /bin/sh) whenever ENOEXEC occurs. There's really no magic here.
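The same fallback is easy to sketch in Python (assuming the ./target-script file from the question exists and is executable; this mimics tryexec rather than being dash's actual logic verbatim):
import errno
import os

script = "./target-script"
try:
    os.execv(script, [script])   # kernel sees no shebang and no ELF magic -> ENOEXEC
except OSError as e:
    if e.errno != errno.ENOEXEC:
        raise
    # Retry via the shell, just as tryexec() substitutes _PATH_BSHELL:
    os.execv("/bin/sh", ["/bin/sh", script])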
I find that FreeBSD exhibits identical behavior in bash and in its own sh.
The way bash handles this is similar but much more complicated. If you want to look into it further, I recommend reading bash's execute_command.c, looking specifically at execute_shell_script and then shell_execve. The comments are quite descriptive.
(Looks like Sorpigal has covered it but I've already typed this up and it may be of interest.)
According to Section 3.16 of the Unix FAQ, the shell first looks at the magic number (first two bytes of the file). Some numbers indicate a binary executable; #! indicates that the rest of the line should be interpreted as a shebang. Otherwise, the shell tries to run it as a shell script.
Additionally, it seems that csh looks at the first byte, and if it's #, it'll try to run it as a csh script.
If I use a command like this one:
./program >> a.txt &
and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing whether the computation is going well until it actually stops. I want to be able to read the redirected output file while the program is running.
What I want is similar to opening the file, appending to it, and closing it after every write; if the file is only closed at the end of the program, no data can be read from it until the program ends. The only redirection I know behaves like the latter.
You can test it with this little Python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0, 100000)
for i in l:
    if i % 1000 == 0:
        print i
    for j in l:
        s = i + j
One can run this with:
python program.py >> a.txt &
Then run cat a.txt: you will only get results once the script is done computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of standard output. For Python there is sys.stdout.flush(), or even some of the suggestions here; a minimal sketch follows this list.
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
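For the Python script in the question, here is a minimal sketch of the first workaround (the sleep merely stands in for the expensive inner loop):
import sys
import time

for i in range(0, 100000, 1000):
    print(i)
    sys.stdout.flush()   # force the line out even when stdout is a file or pipe
    time.sleep(0.1)      # stand-in for the expensive inner loop
Run it with python program.py >> a.txt & and cat a.txt will now show lines as they are produced.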
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override selected functions of the C library, and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>

/* Override getenv(): the first call looks up the real getenv() via
 * dlsym() and switches stdout to line-buffered mode; every call is
 * then forwarded to the real function. */
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        getenv_real = (char *(*)(const char *)) dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program, try stdbuf (part of coreutils starting with version 7.5, apparently).
This makes stdout line-buffered:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check whether their output is a terminal. If the output is a terminal, then output is buffered one line at a time (so each line is written out as it is generated); if it is not, output is buffered in larger chunks (4096 bytes at a time is typical). This behaviour is the default in the C library (when using printf, for example) and in the C++ library (when using cout), so any program written in C or C++ will do this.
Most scripting language interpreters (perl, python, etc.) are written in C or C++, so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>