How to implement pipe under Linux?

I would like my code to handle the output coming from a pipe.
For example: ls -l | mycode
How can I achieve this under Linux?

Just read from stdin, such as with scanf().

The pipe in Linux/Unix will transfer the output of the first program to the standard input of the second. How you access the standard input will depend on what language you are using.

When you type "ls -l | mycode" into the shell, it is the shell program itself (e.g. bash, zsh) that does all the trickery with pipes. It simply provides the output from ls -l to mycode on standard input. Similarly, anything you write on standard output or error can be redirected or piped by the shell to some other process or file. Exactly how to read and write to those files depends on the language.
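As a sketch, a minimal mycode that reads line by line from standard input could look like the following (the processing here, numbering and counting lines, is just illustrative):

```shell
#!/bin/sh
# mycode: read whatever the previous pipeline stage wrote to our stdin,
# one line at a time, and process it (here: number and echo each line).
n=0
while IFS= read -r line; do
    n=$((n + 1))
    printf '%d: %s\n' "$n" "$line"
done
printf 'total lines: %d\n' "$n"
```

Saved as mycode and made executable, it would be used exactly as in the question: ls -l | ./mycode.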

Related

Does any magic "stdout" file exist? [duplicate]

This question already has answers here:
pass stdout as file name for command line util?
Some utilities cannot output to stdout.
Example
util out.txt
It works. But sometimes I want to pipe the output to some other program like:
util out.txt | grep test
Is there a magic "stdout" file in Linux, so that if I replace out.txt above, the data will be redirected to the stdout pipe?
Note: I know util out.txt && cat out.txt | grep test, so please do not post answers like this.
You could use /dev/stdout. But that won't always work if a program needs to lseek(2) (or mmap(2)) it.
Usually /dev/stdout is a symlink to /proc/self/fd/1 (see proc(5)).
IIRC, some versions of some programs (probably GNU awk) handle the /dev/stdout filename specially (e.g. so it works even without /proc/ being mounted).
A common, but not universal, convention for program arguments is to treat -, when used as a file name, as stdout (or stdin). For example, see tar(1) used with -f -.
If you write a utility, I recommend following that - convention when possible, and documenting whether stdout needs to be seekable.
Some programs test whether stdout or stdin is a terminal (e.g. using isatty(3)) and behave differently, e.g. by using ncurses. If you write such a program, I recommend providing an option to disable that detection.
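To make those points concrete, here is a hedged sketch (dd stands in for a utility that insists on a file argument; the payload and filenames are invented). Note that /dev/stdout as shown here is Linux-specific:

```shell
#!/bin/sh
# 1. /dev/stdout as the "file" argument: dd's of= output lands in the pipe.
printf 'hello\n' | dd of=/dev/stdout 2>/dev/null | tr 'a-z' 'A-Z'

# 2. The "-" convention: tar writes the archive to stdout, wc consumes it.
tmpdir=$(mktemp -d)
echo 'payload' > "$tmpdir/file"
tar -cf - -C "$tmpdir" file | wc -c
rm -rf "$tmpdir"

# 3. A shell analogue of isatty(3): test whether fd 1 is a terminal.
if [ -t 1 ]; then
    echo 'stdout is a terminal'
else
    echo 'stdout is redirected or piped'
fi
```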

Redirect output from subshell and processing through pipe at the same time

tl;dr: I need a way to process (with grep) the output inside a subshell AND redirect all the original output to the main stdout/stderr at the same time. I am looking for a shell-independent (!) way.
In detail
There is a proprietary binary which I want to grep for some value
The proprietary binary from time to time might be interactive to ask for a password (depends on the internal logic)
I want to grep the output of the binary AND be able to enter the password if it is required to proceed further
So the script which is supposed to achieve my task might look like:
#!/bin/sh
user_id=... # some calculated value
list_actions_cmd="./proprietary-binary --actions ${user_id}"
action_item=$(${list_actions_cmd} | grep '^Main:')
Here proprietary-binary might ask for a password on stdin. Since the command substitution $() captures all the output, an end user won't realize that list_actions_cmd is waiting for input. What I want is either to show all the output of list_actions_cmd while grepping at the same time, or at least to catch the prompt so the user knows a password is being requested.
Currently what I figured out is to tee the output and grep there:
#!/bin/sh
user_id=... # some calculated value
list_actions_cmd="./proprietary-binary --actions ${user_id}"
$list_actions_cmd 2>&1 | tee /tmp/.proprietary-binary.log
action_item=$(grep "^Main" /tmp/.proprietary-binary.log)
But I wonder is there any elegant shell-independent (not limited to bash which is quite powerful) solution without any intermediate temporary file? Thanks.
What about duplicating output to stderr if executed in a terminal:
item=$(your_command | tee /dev/stderr | grep 'regexp')
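A runnable sketch of that idea, with a small shell function standing in for the proprietary binary (its output lines here are invented): the full output is duplicated to stderr, where the user sees it, while grep still captures the wanted line into the variable.

```shell
#!/bin/sh
# fake_binary is a hypothetical stand-in for ./proprietary-binary.
fake_binary() {
    printf 'Enter password:\nMain: deploy\nDone.\n'
}

# tee copies everything to stderr (visible to the user) while the
# pipeline's stdout continues into grep for capture.
item=$(fake_binary | tee /dev/stderr | grep '^Main:')
printf 'captured: %s\n' "$item"
```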

Grep command Linux ordering of source string and target string

Grep command syntax is as:
grep "literal_string" filename --> search for the string in filename.
So I am assuming the order is like this:
keyword (grep) --> string to be searched --> filename/source string, and the command is interpreted from left to right.
My question is how the commands such as this got processed:
ps -ef | grep rman
Is the order optional?
How is grep able to know that the source is on the left and not on the right? Or am I missing something here?
When using Unix Pipes, most system commands will take the output from the previous command (to the left of the pipe ) and then pass the output onto the command to the right of the pipe.
The order is important when using grep with or without a pipe.
Thus
grep doberman /file/about/dogs
is the same as
cat /file/about/dogs | grep doberman
See Pipes on http://linuxcommand.org/lts0060.php for some more information.
As a step further from Kyle's answer regarding pipes: most shell commands read their input from stdin and write their output to stdout. Many commands will also let you specify a filename to read from or write to, or let you redirect a file to stdin as input and redirect the command's stdout to a file. But regardless of how you specify what to read, the command processes input from its stdin and provides output on stdout (errors on stderr). stdin, stdout, and stderr are the designations of file descriptors 0, 1 and 2, respectively.
This basic function is what allows commands to be piped together. A pipe (represented by the | character) does nothing more than take the stdout from the first command (on the left) and direct it to the next command's stdin. As such, yes, the order is important.
Another point to remember is that each piped process is run in its own subshell. Put another way, each | will spawn another shell to run the following command in. This has implications if you are relying on the environment of one process for the next.
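A quick way to see that last point, assuming a shell (such as bash in its default mode, or dash) that runs each pipeline component in a subshell:

```shell
#!/bin/sh
count=0
printf 'a\nb\n' | while IFS= read -r line; do
    count=$((count + 1))    # increments inside the subshell only
done
# The parent shell's count is unchanged: this prints count=0, not count=2.
echo "count=$count"
```

(Some shells, e.g. ksh or bash with the lastpipe option, run the last pipeline component in the current shell, in which case the result differs.)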
Hopefully, these answers will give you a better feel for what is taking place.

How can I redirect output in Bash

I work with a program whose usage is "program input-file output-file".
How can I write the result to STDOUT instead of writing it into the output-file?
Thanks.
Put /dev/stdout as the output filename.
Use /dev/fd/1 or /dev/stdout as the output file. Some programs will recognize - to mean stdout, or will even use it automatically if the output file is omitted, but this is up to the individual program (unlike the /dev ones which are system services, although sometimes emulated by shells on systems that lack them).
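A minimal demonstration, using cp as a stand-in for a "program input-file output-file" style utility (the filenames are illustrative):

```shell
#!/bin/sh
printf 'result\n' > input.txt
# Point the output-file argument at /dev/stdout: the data goes into the
# pipe (here consumed by tr) instead of landing in a file on disk.
cp input.txt /dev/stdout | tr 'a-z' 'A-Z'
rm -f input.txt
```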

pass stdout as file name for command line util?

I'm working with a command line utility that requires passing the name of a file to write output to, e.g.
foo -o output.txt
The only thing it writes to stdout is a message that indicates that it ran successfully. I'd like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don't need to keep, and I'd rather pipe the streams than work on massive files in a stepwise manner.
Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?
Solution 1: Using process substitution
The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:
foo -o >(other_command)
(Note that this is a bashism. There are similar solutions for other shells, but the bottom line is that it's not portable.)
Solution 2: Using named pipes explicitly
You can do the above explicitly / manually as follows:
Create a named pipe using the mkfifo command.
mkfifo my_buf
Launch your other command with that file as input
other_command < my_buf
Execute foo and let it write its output to my_buf
foo -o my_buf
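The three steps above, put together in one runnable sketch (printf stands in for foo -o, grep for other_command, and my_buf lives in a temporary directory):

```shell
#!/bin/sh
tmpdir=$(mktemp -d)
# Step 1: create the named pipe.
mkfifo "$tmpdir/my_buf"

# Step 2: start the reader first, in the background (it blocks until
# a writer opens the other end of the FIFO).
grep '^test' < "$tmpdir/my_buf" &

# Step 3: the writer (foo -o my_buf in the original scenario).
printf 'test line\nignored line\n' > "$tmpdir/my_buf"

wait            # let the background reader finish
rm -rf "$tmpdir"
```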
Solution 3: Using /dev/stdout
You can also use the device file /dev/stdout as follows
foo -o /dev/stdout | other_command
Named pipes work fine, but you have a nicer, more direct syntax available via bash process substitution that has the added benefit of not using a permanent named pipe that must later be deleted (process substitution uses temporary named pipes behind the scenes):
foo -o >(other command)
Also, should you want to pipe the output to your command and also save the output to a file, you can do this:
foo -o >(tee output.txt) | other command
The shortest option is simply to pass the stdout device file as the output name:
foo -o /dev/stdout
You could use the magic of UNIX and create a named pipe :)
Create the pipe
$ mkfifo mypipe
Start the process that reads from the pipe
$ second-process < mypipe
Start the process that writes into the pipe
$ foo -o mypipe
foo -o <(cat)
if for some reason you don't have permission to write to /dev/stdout
I use /dev/tty as the output filename, similar to using /dev/null when you want no output at all. Note, though, that /dev/tty writes directly to the controlling terminal, so output sent there bypasses any pipe.
