Using pipes in command line arguments - linux

I am working on an assignment on pipes in Linux. I have a program that prints hello world and takes no arguments. But when I run the program with a command line argument, as in ./a ls | wc, I get the output 0 2 11, i.e. the output of hello world is passed through the pipe to wc. Can someone please explain this behaviour?

Because your program takes no arguments, the fact that you passed it ls as an argument makes no difference to its output: it ignores that argument and still emits hello world, just as it would if ls had not been passed on its argument list.
Understand that a pipeline sets up a FIFO -- that is, it connects two programs.
./a ls ## this program's output...
wc ## ...is connected to this program's input.
The pipeline is thus not an argument to a; it's an instruction to the shell about how to route the output of a, in this case an instruction to route it into the stdin of wc.
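To see it concretely, here is a minimal stand-in for the program (a hypothetical a.sh; the asker's binary presumably does the equivalent in C):
#!/bin/bash
# Ignores any arguments and prints hello world without a trailing newline.
printf 'hello world'
Running ./a.sh ls | wc prints 0 2 11 (zero newlines, two words, eleven characters), matching the output reported in the question; the ls argument changes nothing.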

Prevent script running with same arguments twice

We are looking into building a logcheck script that will tail a given log file and email when the given arguments are found. I am having trouble accurately determining whether another instance of this script is already running with at least one of the same arguments against the same file. The script can take the following:
logcheck -i <filename(s)> <searchCriterion> <optionalEmailAddresses>
I have tried to use ps aux with a series of grep, sed, and cut, but it always ends up being more code than the script itself and seldom works very efficiently. Is there an efficient way to tell if another version of this script is running with the same filename and search criteria? A few examples of input:
EX1 ./logcheck -i file1,file2,file3 "foo string 0123" email@address.com
EX2 ./logcheck -s file1 Hello,World,Foo
EX3 ./logcheck -i file3 foo email@address1.com,email@address2.com
In this case EX3 should not run, because EX1 is already running with the parameters file3 and foo.
There are many solutions to your problem. I would recommend creating a lock file with the following format:
arg1Ex1 PID#(Ex1)
arg2Ex1 PID#(Ex1)
arg3Ex1 PID#(Ex1)
arg4Ex1 PID#(Ex1)
arg1Ex2 PID#(Ex2)
arg2Ex2 PID#(Ex2)
arg3Ex2 PID#(Ex2)
arg4Ex2 PID#(Ex2)
When your script starts:
It searches the lock file for each of the arguments it has received (with awk or grep).
If one of the arguments is present in the list, it fetches the corresponding PID (awk '{print $2}', for example) and checks whether that process is still running (ps). This double check guards against concurrency, and against garbage left in the file by a process that previously ended abnormally.
If the process is still running, the script does not run.
Otherwise it appends its arguments, together with its own PID, to the lock file and runs.
At the end of the execution, it removes the lines containing the arguments it used, or removes all lines carrying its PID. A sketch of this approach follows.
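A minimal sketch of this scheme in bash; the lock file location /tmp/logcheck.lock is an assumption, and sed -i is GNU-specific:
#!/bin/bash
# Sketch: refuse to run if a live instance has registered any of our arguments.
# Note: the check-then-register sequence is not atomic; a real script might add flock.
lockfile=/tmp/logcheck.lock    # assumed location
touch "$lockfile"
for arg in "$@"; do
    # look up a PID previously registered for this argument
    pid=$(awk -v a="$arg" '$1 == a { print $2; exit }' "$lockfile")
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "another instance (PID $pid) is already running with argument '$arg'" >&2
        exit 1
    fi
done
for arg in "$@"; do
    printf '%s %s\n' "$arg" "$$" >> "$lockfile"    # register argument/PID pairs
done
trap 'sed -i "/ $$\$/d" "$lockfile"' EXIT          # drop our own entries on exit
# ... the actual tail-and-email work would go here ...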

Unix: What does cat by itself do?

I saw the line data=$(cat) in a bash script (which I took to be just declaring an empty variable) and am mystified as to what it could possibly do.
I read the man page, but it doesn't have an example or explanation of this. Does this capture stdin or something? Is there any documentation on it?
EDIT: Specifically, how the heck does data=$(cat) make this hook script work?
#!/bin/bash
# Runs all executable pre-commit-* hooks and exits after,
# if any of them was not successful.
#
# Based on
# http://osdir.com/ml/git/2009-01/msg00308.html
data=$(cat)
exitcodes=()
hookname=`basename $0`
# Run each hook, passing through STDIN and storing the exit code.
# We don't want to bail at the first failure, as the user might
# then bypass the hooks without knowing about additional issues.
for hook in $GIT_DIR/hooks/$hookname-*; do
test -x "$hook" || continue
echo "$data" | "$hook"
exitcodes+=($?)
done
https://github.com/henrik/dotfiles/blob/master/git_template/hooks/pre-commit
cat will concatenate its input to its output.
In the context of the variable capture you posted, the effect is to assign the statement's (or containing script's) standard input to the variable.
The command substitution $(command) will return the command's output; the assignment will assign the substituted string to the variable; and in the absence of a file name argument, cat will read and print standard input.
The Git hook script you found this in captures the commit data from standard input so that it can be repeatedly piped to each hook script separately. You only get one copy of standard input, so if you need it multiple times, you need to capture it somehow. (I would use a temporary file, and quote all file name variables properly; but keeping the data in a variable is certainly okay, especially if you only expect fairly small amounts of input.)
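For reference, a sketch of the temporary-file variant mentioned above (mktemp supplies the file name; everything else mirrors the hook script):
# Capture the commit data once in a temp file, then replay it for each hook.
data_file=$(mktemp) || exit 1
trap 'rm -f "$data_file"' EXIT
cat > "$data_file"
for hook in "$GIT_DIR/hooks/$hookname"-*; do
    test -x "$hook" || continue
    "$hook" < "$data_file"    # each hook gets a fresh copy of the captured stdin
done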
Doing:
t@t:~# temp=$(cat)
hello how
are you?
t@t:~# echo $temp
hello how are you?
(A single Ctrl-D on a line by itself after "are you?" terminates the input. The unquoted echo $temp is also why the two input lines come back as one: word splitting collapses the newline into a space.)
As the manual says:
cat - concatenate files and print on the standard output
Also:
cat: copy standard input to standard output.
Here, cat reads your stdin, and the command substitution assigns what it read, as a single string, to the variable temp.
Say your bash script script.sh is:
#!/bin/bash
data=$(cat)
Then, the following commands will store the string STR in the variable data:
echo STR | bash script.sh
bash script.sh < <(echo STR)
bash script.sh <<< STR

Shell script to call external program which has user-interface

I have an external program, say a.out, which while running asks for an input parameter, i.e.,
./a.out
Please select either 1 or 2:
1) this will do something
2) this will do something else
Then when I enter '1', it will do its job. I don't have the source code, just the binary, so I can't change the program.
I want to write a shell script which runs a.out and feeds the '1' in for me.
I tried many things, including silly things like:
./a.out 1
./a.out << 1
./a.out < 1
etc., but none of them work.
Could you please let me know if there is any way to write such a shell script?
Thanks,
dbm368
I think you just need a pipe. For example:
echo 1 | ./a.out
In general terms, a pipe takes whatever the program on the left writes to stdout and redirects it to the stdin of the program on the right.
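If the program asks more than one question, the same idea extends: printf can supply one answer per line, and in bash a here-string works for a single answer. (A sketch; the second prompt and its 'y' answer are hypothetical.)
printf '%s\n' 1 y | ./a.out    # answers two prompts in order: '1', then 'y'
./a.out <<< 1                  # bash here-string, equivalent to echo 1 | ./a.out
Note that this only works for programs that read their answers from stdin; a program that reads directly from /dev/tty needs a tool like expect.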

Linux command line / shell terminology

I'm a little confused about the terminology I should use when referring to Linux command line programs and commands.
If I were to execute a command such as:
mkdir testing_dir
Would "testing_dir" be called an 'operand' to the program mkdir, or an 'argument' or a 'parameter'?
Another question I have is, what terminology would you use to describe the following process?
find *.txt | grep a | grep b
Could I say; the output of the "find" program is piped (redirected) to the input of the grep program?
For the first question, it is an "argument". That is why in C programs, the main prototype is int main(int argc, char** argv). argc means argument count, and argv means argument vector.
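The shell works the same way: a script sees its arguments through $#, $1, $2, and so on. A quick illustration (args.sh is a hypothetical name):
#!/bin/bash
# args.sh: print the argument count and each argument, the shell's analogue of argc/argv
echo "argument count: $#"
for arg in "$@"; do
    echo "argument: $arg"
done
Running bash args.sh testing_dir prints "argument count: 1" followed by "argument: testing_dir".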
For the second, it is "piped". (Because it is done with the pipe | character, and/or because the data passes from one program to the next as if through a pipe.) Generally, stdout is only called "redirected" when it is sent to a file with the > operator.
Most people would call it the argument.
Yes, it's piped.
Argument or parameter rather than operand; mkdir is a command, not an operator.
Yes, the output is piped to the standard input of the next command.
Commands do not take operands; operators have operands. What commands take are arguments, and since mkdir is a command, the directory name is an argument.
As for the second question: the output of the first command is fed to the second command as input. "Redirection" usually refers to the operators >, >>, 2>, and 2>>.

redirecting stdin _and_ stdout to a pipe

I would like to run a program "A" and have its output go to the input of another program "B", while my stdin also goes to the input of "B". If program "A" exits, I'd like "B" to continue running.
I can redirect A output to B input easily:
./a | ./b
And I can combine stderr into the output if I'd like:
./a 2>&1 | ./b
But I can't figure out how to combine stdin into the output. My guess would be:
./a 0>&1 | ./b
but it doesn't work.
Here's a test that doesn't require us to write any test programs:
$ echo ls 0>&1 | /bin/sh -i
$ a b info.txt
$
/bin/sh: Cannot set tty process group (No such process)
If possible, I'd like to do this using only bash redirection on the command line (I don't want to write a C program to fork off child processes and do anything complicated every time I want to do some redirection of stdin to a pipe).
This cannot be done without writing an auxiliary program.
In general, stdin could be a read-only file descriptor (heck, it might refer to a read-only file). So you cannot "insert" anything into it.
You will need to write a "helper" program that monitors two file descriptors (say, 0 and 3) in order to read from both and "merge" them. A simple select or poll loop would be sufficient, and you could write it in most scripting languages, but not the shell, I don't think.
Then you can use shell redirection to feed your program's output to descriptor 3 of the "helper".
Since what you want is basically the opposite of "tee", I might call it "eet"...
[edit]
If only you could launch "cat" in the background...
But that will fail because background processes with a controlling terminal cannot read from stdin. So if you could just detach "cat" from its controlling terminal and run it in the background...
On Linux, "setsid cat" should do it, roughly. But (a) I could not get it to work very well and (b) I really do not have time for this today and (c) it is non-standard anyway.
I would just write the helper program.
[edit 2]
OK, this seems to work:
{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO'
The set -m thing forces bash to enable job control, which apparently is needed to prevent the shell from redirecting stdin from /dev/null.
Here, the echo HELLO represents your "program A". The seq commands (with the sleep in the middle) are just to provide some input. And yes, you can pipe this whole thing to process B.
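For example, piping the whole construct on to a stand-in for program B (cat -n here just numbers the merged lines):
{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO' | cat -n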
About as ugly and non-portable a solution as you could ask for...
A pipe has two ends. One is for writing, and that which gets written appears in the other end, which is for reading.
It's a pipe, not a T or Y junction.
I don't think your scenario is possible. Having "stdin going to input of" anything doesn't make sense.
If I understand your requirements correctly, you want this set up (ASCII art to the fore):
o----+----->| A |----+---->| B |---->o
     |               ^
     |               |
     +---------------+
with the additional constraint that if process A closes up shop, process B should be able to continue with the input stream going to B.
This is a non-standard setup, as you realize, and can only be achieved by using an auxiliary program to drive the input to A and B. You end up with some interesting synchronization issues but it will all work remarkably well as long as your messages are short enough.
The plumbing necessary to achieve this is notable - you'll need two pipes, one for the input to A and the other for the input to B, and the output of A will be connected to the input of B as well.
o---->| C |---------->| A |----+---->| B |---->o
             |                 ^
             |                 |
             +-----------------+
Note that C will be writing the data twice, once to A and once to B. Note, too, that the pipe from A to B is the same pipe as the pipe from C to B.
To make the given test case work, you have to read in a while loop from the controlling terminal device /dev/tty inside a sh -c '...' construct.
Note the use of eval (could it be avoided here?), and that multi-line commands at the input> prompt will fail.
echo 'ls; export var=myval' | (
    stdin="$(</dev/stdin)"
    /bin/sh -i -c '
        eval "$1";
        while IFS="" read -e -r -p "input> " line; do
            history -s "${line}"
            eval "${line}";
        done </dev/tty
    ' argv0 "${stdin}"
)
input> echo $var
For a similar problem and the use of named pipes see here:
BASH: Best architecture for reading from two input streams
This can't be done exactly as shown, but to perform your example you can make use of cat's ability to join files together:
cat <(echo ls) - | /bin/sh
(You can do -i, but then you'll have to have another process kill the /bin/sh, as your attempts to Ctrl-C and Ctrl-D out will fail.)
This assumes that you want to pass in your piped input and then accept from stdin. You can also make it so that it does something after stdin is done, or on both sides -- but it won't merge input character-by-character or line-by-line.
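For instance, to run one command before the interactive input and another after it ends (a sketch along the same lines; the echo commands are placeholders):
cat <(echo 'echo before stdin') - <(echo 'echo after stdin') | /bin/sh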
This seems to do what you want:
$ ( ./a <&-; cat ) | ./b
(It's not clear to me if you want a to get input... this solution sends all input to b.)
Of course, in this case the inputs to b are strictly ordered: all of the output of a is sent to b first, then a terminates, then input goes to b. If you want things interleaved, try:
$ ( ./a <&- & cat ) | ./b
