How do I get the output of an executable without using the "write" command? - linux

I have a hello world program that I have compiled. How do I get the output of the executable into a file instead of printing it in the terminal where the program runs? Can it be done without including a "write" statement in the code?
The executable is "hello.out", compiled with "mpif90 hello.f90 -o hello.out".

./hello.out > filename
If you still want to see the output on the terminal as well, you can pipe it to tee instead:
./hello.out | tee filename
This will write the output to the file and to the terminal.
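If the program also writes diagnostics to standard error, a plain > will not capture them; assuming a POSIX shell such as bash, both streams can be sent to the file:
./hello.out > filename 2>&1
and the tee variant becomes:
./hello.out 2>&1 | tee filename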

Related

BASH Reading prompt from GDB

My intentions are the following. I am debugging an executable compiled with gcc from a .c source file. Let's call this compiled program "foo". When I run the following command from my terminal on my Mac:
gdb -q ./foo
I get an output of:
Reading symbols from ./foo...Reading symbols from /Users/john/Documents....done.
done.
And immediately I get a prompt from the shell looking like so:
(gdb) "Shell waiting for my input command here from keyboard"
At this point I want to automate the input of certain commands like:
break, list, x/x "symbol in .c file", x/s "symbol in .c file" and many more. For this automation I want to use a little bash script, and so far I have the following:
#!/bin/bash
SCRIPT=$1
# gstdbuf -oL makes gdb's stdout line-buffered, so each line reaches
# the while loop as soon as gdb prints it
gstdbuf -oL gdb -q "$SCRIPT" |
while read -r LINE
do
echo "$LINE"
done
When I execute this bash script, I see the following output in my terminal:
Reading symbols from ./foo...Reading symbols from /Users/john/Documents....done.
done.
But I do not see the:
(gdb) "Shell waiting for my input command here from keyboard"
How can I detect this prompt from the gdb process in my shell script in order to be able to automate the commands I want instead of inputting them manually?
Many Thanks!
You can create a file .gdbinit in your project directory and put the initial commands there. gdb will execute them on startup, provided you add the following line to $HOME/.gdbinit:
add-auto-load-safe-path /path/to/project/.gdbinit
Now you can place commands into /path/to/project/.gdbinit, like this:
break main
run --foo=bar
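If you would rather keep everything in the bash script itself, gdb can also be driven non-interactively with its -ex and -batch options, which avoids waiting on the (gdb) prompt altogether. A minimal sketch, where foo, main and some_symbol stand in for your own program and symbols:

gdb -q -batch \
    -ex 'break main' \
    -ex 'run' \
    -ex 'x/x &some_symbol' \
    ./foo

-batch makes gdb exit after the commands have run, so its output can be captured or piped like that of any other command.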

What does this mean? $ ./your_program <dino>wilma

I don't understand the meaning of:
$ ./your_program <dino>wilma
I'm learning perl, and I do not understand how to do this. I am using PUTTY.
The $ ./your_program indicates that you should run the program your_program in your shell. It assumes you are on Linux. The $ indicates your command prompt.
So if you have a Windows machine and a server or another computer with Linux that you connect to with PuTTY, you need to write your program on that machine.
Then you need to make it executable.
$ chmod u+x your_program
Now you can run it. Running an executable program in Linux is done by typing its name into the shell; you just did that with chmod, and maybe with vim or emacs when you created the file. But because your program's directory is not in the shell's search path, you need to write ./your_program so the shell knows you want to run the file in the current directory. That's what the . is for.
$ ./your_program wilma
The wilma is a command line argument. It will be passed to your program.
You could also run it with the perl interpreter without making it executable.
$ perl your_program wilma
You can name all your Perl programs with .pl at the end so it's easier for you to distinguish what type of file they are.
$ denotes the unix command prompt.
./ is the current path - by convention unix systems don't look for executable programs in the current working directory (the places it looks are defined by the PATH environment variable).
your_program is the name of the file you just created/saved.
The above will only work if your file is set "executable" - chmod u+x your_program. You can alternatively use perl your_program and achieve basically the same result.
<dino means "open the file dino and feed it to this program on standard input (STDIN)".
>wilma means "open the file wilma, truncate it, and write the output of this program to it".
STDIN is a unix concept that's 'standard input' - it can either be 'things you type' or the content of a file or command.
That might not make a lot of sense, but it's all about piping - you can:
cat file | grep someword | sed 's/oneword/anotherword/'
That opens a file (with cat), keeps only the lines containing someword, and then does a pattern replacement on them.
cat will "send" file to grep on STDIN.
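To make the dino/wilma example concrete, here is a minimal sketch of such a filter program in bash (the transformation is arbitrary; any program that blindly reads STDIN and writes STDOUT behaves the same way):

#!/bin/bash
# your_program: read lines from STDIN, write a transformed copy to STDOUT.
# Neither dino nor wilma appears in the code; the shell wires them up.
while IFS= read -r line
do
echo "line: $line"
done

Invoked as ./your_program <dino >wilma, it reads dino and writes wilma without ever knowing either file name.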
It seems to be a quotation from chapter 5.6 of Learning Perl; the whole quote is:
...In that way, the user can type a command like this one at the shell
prompt:
$ ./your_program <dino >wilma
That command tells the shell that the program's input should be read
from the file dino, and the output should go to the file wilma. As
long as the program blindly reads its input from STDIN, processes it
(in whatever way we need), and blindly writes its output to STDOUT,
this will work just fine.
http://perl.find-info.ru/perl/027/learnperl4-chp-5-sect-6.html
Perhaps a Chinese translation might be of use to the OP 文海梅:
http://www.biostatistic.net/thread-4903-1-1.html

Linux All Output to a File

Is there any way to tell a Linux system to put all output (stdout, stderr) into a file? Without using redirection or pipes, and without modifying how the scripts get called. Just tell Linux to use a file for output.
For example, script test1.sh:
script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like "./test1.sh" (without redirection or a pipe),
I'd like to see "Testing 123" in a file (/tmp/linux_output).
Problem: on the system, a binary calls a script, and this script calls many other scripts. It is not possible to modify each call, so if I can make Linux put the output into a file, I can review the logs.
#!/bin/bash
# exec with no command applies the redirection to the current shell itself,
# so stdout and stderr of everything that follows go to 'file'
exec >file 2>&1
echo "Testing 123 "
You can read more about exec in the bash manual.
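Because redirections set up with exec are inherited by every child process, the "binary calls a script which calls other scripts" chain only needs one change at the very top. A sketch, assuming /tmp/linux_output is the desired log file and the entry-point path is a placeholder:

#!/bin/bash
# wrapper.sh: capture everything the whole process tree prints
exec >>/tmp/linux_output 2>&1         # stdout and stderr of this shell and all children
exec /path/to/original_entry_point "$@"   # hand over to the real program unchanged

None of the called scripts need to be modified.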
If you are running the program from a terminal, you can use the command script.
It will open up a sub-shell. Do what you need to do.
It will copy all output that appears on the terminal into a file (called typescript by default). When you are done, exit the shell with ^D or exit.
This does not use redirection or pipes.
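For example, the util-linux version of script can run a single command non-interactively and write its terminal output to a chosen file (option spelling may differ on BSD/macOS):

script -c ./test1.sh /tmp/linux_output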
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer - depending on your terminal window and the options in its menus, there may be an option in there to capture terminal I/O to a file.
Your requirement, taken literally, is an impractical one, because it is based on a slight misunderstanding. Fundamentally, to get the output to go into a file, you will have to change something to direct it there - which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output redirections configured in a parent process will be inherited. So you only have to set up the redirection once, using either a shell, or a custom launcher program or intermediary. After that it will be inherited.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Run the first script, redirecting both stdout and stderr (bash version - and you can do this in many ways, such as by writing a C program that redirects its outputs and then exec()'s your real program):
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
Of course, if you really dislike this, you can always modify the kernel - but again, that is changing something (and a very ungainly solution, too).

Redirecting standard error to file and leaving standard output to screen when launching makefile

I am aware that for redirecting both standard error and standard output to a file I have to do:
make >&! output.txt
Note that I use ! to overwrite the file. But how can I redirect standard error to a file and leave standard output on the screen? Or, even better, have both error and output in the file but also output on the screen, so I can see how my compilation is progressing?
I tried:
make 2>! output.txt
but it gives me an error.
Note that > is enough to overwrite the file. You can use the tail -f command to see the output on screen while it is redirected to a file:
(make 1>output.txt 2>error.txt &) && tail -f output.txt error.txt
You can do this simply by piping into the tee command. The following will put both stdout and stderr into a file and also send them to the terminal:
make |& tee output.txt
Edit
Explanation from GNU Bash manual, section 3.2.2 Pipelines:
If ‘|&’ is used, command1’s standard error, in addition to its
standard output, is connected to command2’s standard input through the
pipe; it is shorthand for 2>&1 |. This implicit redirection of the
standard error to the standard output is performed after any
redirections specified by the command.
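Since |& is just shorthand for 2>&1 |, the same pipeline can be written portably for shells that lack it:

make 2>&1 | tee output.txt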
You are reading bash/sh documentation and using tcsh. tcsh doesn't have any way to redirect just stderr. You might want to switch to one of the non-csh shells.
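If switching is an option, bash handles the original requirement directly; a short sketch using the question's output.txt:

make 2> output.txt

sends only standard error to the file and leaves standard output on the screen, and

make 2> >(tee output.txt >&2)

logs standard error while still showing it, via bash process substitution.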

pass stdout as file name for command line util?

I'm working with a command line utility that requires passing the name of a file to write output to, e.g.
foo -o output.txt
The only thing it writes to stdout is a message that indicates that it ran successfully. I'd like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don't need to keep, and I'd rather pipe the streams than work on massive files in a stepwise manner.
Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?
Solution 1: Using process substitution
The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:
foo -o >(other_command)
(Note that this is a bashism. There's similar solutions for other shells, but bottom line is that it's not portable.)
Solution 2: Using named pipes explicitly
You can do the above explicitly / manually as follows (a combined sketch appears after the steps):
Create a named pipe using the mkfifo command.
mkfifo my_buf
Launch your other command with that pipe as input (in a second terminal, or backgrounded with &)
other_command < my_buf
Execute foo and let it write its output to my_buf
foo -o my_buf
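Put together in a single shell session, with the reader backgrounded so one terminal suffices (foo and other_command are the question's placeholders):

mkfifo my_buf
other_command < my_buf &   # start the reader first, or foo blocks opening the pipe
foo -o my_buf
wait                       # let other_command finish draining the pipe
rm my_buf                  # a FIFO persists on disk until removed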
Solution 3: Using /dev/stdout
You can also use the device file /dev/stdout as follows
foo -o /dev/stdout | other_command
Named pipes work fine, but you have a nicer, more direct syntax available via bash process substitution that has the added benefit of not using a permanent named pipe that must later be deleted (process substitution uses temporary named pipes behind the scenes):
foo -o >(other command)
Also, should you want to pipe the output to your command and also save the output to a file, you can do this:
foo -o >(tee output.txt) | other command
The shortest solution of all is to pass the stdout device file directly:
foo -o /dev/stdout
You could use the magic of UNIX and create a named pipe :)
Create the pipe
$ mknod mypipe p
Start the process that reads from the pipe
$ second-process < mypipe
Start the process that writes into the pipe
$ foo -o mypipe
If for some reason you don't have permission to write to /dev/stdout, you can use process substitution instead:
foo -o >(cat)
I use /dev/tty as the output filename, analogous to using /dev/null when you want no output at all. Then pipe with | and you are done.
