How can I redirect output in Bash - linux

I work with a program whose usage is "program input-file output-file".
How can I write the result to STDOUT instead of writing it into the output-file?
Thanks.

Put /dev/stdout as the output filename.

Use /dev/fd/1 or /dev/stdout as the output file. Some programs will recognize - to mean stdout, or will even use it automatically if the output file is omitted, but this is up to the individual program (unlike the /dev ones which are system services, although sometimes emulated by shells on systems that lack them).
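Assuming a tool that, like the asker's program, insists on an output-file argument, sort(1) with -o behaves the same way and makes a convenient stand-in to demonstrate the trick:

```shell
# sort(1) writes its result to the file named by -o; pointing -o at
# /dev/stdout sends that result to standard output instead, so it can
# be piped onward.
printf 'banana\napple\n' > input.txt
sort -o /dev/stdout input.txt              # apple, then banana
sort -o /dev/stdout input.txt | grep apple # apple
```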

Related

What does the standalone “-” do in these tar and gzip commands? [duplicate]

Examples:
Create an ISO image and burn it directly to a CD.
mkisofs -V Photos -r /home/vivek/photos | cdrecord -v dev=/dev/dvdrw -
Change to the previous directory.
cd -
Listen on port 12345 and untar data sent to it.
nc -l -p 12345 | tar xvzf -
What is the purpose of the dash and how do I use it?
If you mean the naked - at the end of the tar command, that's a convention common to many commands that read or write files.
It allows you to specify standard input or output rather than an actual file name.
That's the case for your first and third example. For example, the cdrecord command is taking standard input (the ISO image stream produced by mkisofs) and writing it directly to /dev/dvdrw.
With the cd command, every time you change directory, the shell stores the directory you came from. If you run cd with the special "directory name" -, it changes to that remembered directory instead of a real one, so you can switch between two directories quickly.
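A quick illustration of the remembered directory (the paths here are arbitrary):

```shell
cd /tmp
cd /etc
cd -     # prints the previous directory (/tmp) and switches back to it
pwd      # /tmp
```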
Other commands may treat - as a different special value.
It's not magic. Some commands interpret - as the user wanting to read from stdin or write to stdout; there is nothing special about it to the shell.
- means exactly what each command wants it to mean. There are several common conventions, and you've seen examples of most of them in other answers, but none of them are 100% universal.
There is nothing magic about the - character as far as the shell is concerned (except that the shell itself, and some of its built-in commands like cd and echo, use it in conventional ways). Some characters, like \, ', and ", are "magical", having special meanings wherever they appear. These are "shell metacharacters". - is not like that.
To see how a given command uses -, read the documentation for that command.
It means to use the program's standard input stream.
In the case of cd, it means something different: change to the prior working directory.
The magic is in the convention. For decades, people have used - to distinguish options from arguments, and have used - as a filename to mean either stdin or stdout, as appropriate. Do not underestimate the power of convention!
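A self-contained sketch of the -f - convention mentioned above: one tar writes the archive to stdout, a second reads it back from stdin:

```shell
mkdir -p demo
echo hello > demo/file.txt
# czf - : write the compressed archive to stdout
# tzf - : list the contents of the archive read from stdin
tar czf - demo | tar tzf -
```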

Does any magic "stdout" file exist? [duplicate]

This question already has answers here:
pass stdout as file name for command line util?
Some utilities cannot write to stdout; they insist on an output file name.
Example:
util out.txt
That works. But sometimes I want to pipe the output to some other program, like:
util out.txt | grep test
Is there a magic "stdout" file in Linux, so that if I put it in place of out.txt above, the data will be redirected to the stdout pipe?
Note: I know about util out.txt && cat out.txt | grep test, so please do not post answers like that.
You could use /dev/stdout. But that won't always work if a program needs to lseek(2) (or mmap(2)) it.
Usually /dev/stdout is a symlink to /proc/self/fd/1 (see proc(5)).
IIRC some versions of some programs (GNU awk, probably) handle the /dev/stdout filename specially (e.g. so they can work even without /proc/ mounted).
A common, but not universal, convention for program arguments is to consider -, when used as a file name, to represent the stdout (or the stdin). For example, see tar(1) used with -f -.
If you write some utility, I recommend following that - convention when possible and document if stdout needs to be seekable.
Some programs are testing if stdout or stdin is a terminal (e.g. using isatty(3)) to behave differently, e.g. by using ncurses. If you write such a program, I recommend providing a program option to disable that detection.
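The shell's [ -t 1 ] test is the script-level counterpart of isatty(3), which makes the detection easy to see (the function name here is mine):

```shell
# Report whether fd 1 is a terminal, like a program calling isatty(3).
where_is_stdout() {
  if [ -t 1 ]; then
    echo "stdout is a terminal"
  else
    echo "stdout is not a terminal"
  fi
}
where_is_stdout | cat   # fd 1 is a pipe here: "stdout is not a terminal"
```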

monitoring and logging /dev/pts/1, under linux

I want to monitor and log a pseudo-terminal device /dev/pts/1 (for debugging purposes), i.e. I want to see what gets written to the terminal, and I do not want any process using the terminal to notice.
The obvious solution
cat /dev/pts/1
cat </dev/pts/1
does not work: at best, it seems to capture only keystrokes.
In other words, I'd like to have something analogous to the output of
script -t file.timings typescript ;
but I also need the keystrokes. reptyr -l $PID is another program that might help: it redirects
the output of process $PID to another /dev/pts or to pipes.
For normal TTYs you have screendump, or you can even cat the vcs device, but AFAIK there is no easy way to do that on a pseudo-terminal. Maybe you should look at this:
Conspy
Hope this helps
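For completeness, here is how script(1) from util-linux, which the question mentions, records everything written to a fresh pseudo-terminal; note it captures output only, not the keystrokes the asker also wants:

```shell
LOG=$(mktemp)
# -q: no start/done banner in the log; -c: run this command under a
# new pty and record everything written to that pty into $LOG.
script -q -c 'echo hello from the pty' "$LOG"
grep 'hello from the pty' "$LOG"
```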

How to implement pipe under Linux?

I would like my code to handle the output coming from pipe.
for example, ls -l | mycode
how to achieve this under Linux?
Just read from stdin, for example with scanf() in C.
The pipe in Linux/Unix will transfer the output of the first program to the standard input of the second. How you access the standard input will depend on what language you are using.
When you type "ls -l | mycode" into the shell, it is the shell program itself (e.g. bash, zsh) that does all the trickery with pipes. It simply provides the output from ls -l to mycode on standard input. Similarly, anything you write on standard output or error can be redirected or piped by the shell to some other process or file. Exactly how to read and write to those files depends on the language.
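A minimal stand-in for mycode, written as a shell script that reads its standard input line by line (the name and behavior are made up for illustration):

```shell
cat > mycode <<'EOF'
#!/bin/sh
# Read stdin line by line, as any pipe consumer does.
count=0
while IFS= read -r line; do
  count=$((count + 1))
done
echo "read $count lines"
EOF
chmod +x mycode
printf 'a\nb\nc\n' | ./mycode   # read 3 lines
```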

Force a shell script to fflush

I was wondering if it was possible to tell bash that all calls to echo or printf should be followed up by a subsequent call to fflush() on stdout/stderr respectively?
A quick and dirty solution would be to write my own printf implementation that did this and use it in lieu of either built in, but it occurred to me that I might not need to.
I'm writing several build scripts that run at once; for debugging I really need to see the messages they write in order.
If commands use stdio and are connected to a terminal, their output is flushed per line.
Otherwise you'll need to use something like stdbuf on the commands in the pipeline:
http://www.pixelbeat.org/programming/stdio_buffering/
tl;dr: in the script, instead of printf ... try stdbuf -o0 printf ... or stdbuf -oL printf ...
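Assuming GNU coreutils' stdbuf is available, the usage looks like this; with -oL, grep flushes each matching line as soon as it is written rather than when its block buffer fills:

```shell
# grep writing to a pipe is block-buffered by default; stdbuf -oL
# switches its stdout to line buffering so output appears immediately.
printf 'one\ntwo\n' | stdbuf -oL grep two | cat
```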
If you force the file to be read, it seems to cause the buffer to flush. These work for me.
Either read the data into a useless variable:
x=$(<$logfile)
Or do a UUOC:
cat $logfile > /dev/null
Maybe "stty raw" can help, with some other tricks for end-of-line handling. AFAIK "raw" mode turns off line-based buffering, at least when used on a serial port ("stty raw < /dev/ttyS0").
