Give output of one shell script as input to another using named pipes - linux

I'm new to Linux and have been coding some beginner-level shell scripts.
What I want to do is write two scripts. The first script will read input from the user, and the second script will display this input in a loop until it detects an "exit" from the user.
This is how I've coded the two shell scripts.
File1.sh:
read var1
echo $var1
File2.sh:
while [ "$var2" != "exit" ]
do
echo $1
read var2
done
Now, I want to use a named pipe to pass the output of File1.sh as input to var1 of File2.sh. I will probably have to modify the code in File2.sh so that it accepts its input from the named pipe (instead of from $1), but I'm not at all sure how to go about it.
Sending the output of File1.sh into the named pipe can be done as follows:
mkfifo pipe
./File1.sh > pipe
This command keeps asking for input until I break out using Ctrl+C. I don't know why that is.
Also how do I make the File2.sh read from this pipe?
Will this be correct?
pipe|./File2.sh
I'm very new to Linux, but I've searched quite a lot online and there isn't a single example of doing this in a shell script.

As for your original question, the syntax to read from a named pipe (or any other object in the file system) is
./File2.sh <pipe
Also, your script needs to echo "$var2", with the correct variable name, and with double quotes to guard the value against wildcard expansion, word splitting, etc. See also When to wrap quotes around a shell variable?
The code in your own answer has several new problems.
In File1.sh, you are apparently attempting to declare a variable pipe1, but the assignment syntax is wrong: You cannot have whitespace around the equals sign. Because you never use this variable for anything, this is by and large harmless (but will result in pipe1: command not found which is annoying, of course).
In File2.sh, the while loop's syntax is hopelessly broken (that is C-style syntax, not shell), and by putting the redirection on the read inside the loop, you repeatedly reopen the pipe on every iteration. Try this structure instead:
while [ "$input" != "exit" ]
do
read -r input
echo "$input"
done <pipe1
Redirecting the entire loop once is going to be significantly more efficient.
Notice also the option -r to prevent read from performing any parsing of the values it reads. (The ugly default behavior is legacy from the olden days, and cannot be fixed without breaking existing scripts, unfortunately.)
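For the sending side, a matching File1.sh could look something like this (a sketch, assuming the FIFO is named pipe1 in the current directory, as in the loop above):
#!/bin/sh
# Create the FIFO only if it does not already exist
[ -p pipe1 ] || mkfifo pipe1

# Opening the pipe blocks until File2.sh opens it for reading;
# the whole loop writes through a single open of the pipe
while read -r line
do
    printf '%s\n' "$line"
    [ "$line" = exit ] && break
done > pipe1
Run File2.sh in a second terminal; once you type exit, the reader sees it and both scripts terminate.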

First, in File1.sh, echo var1 should be echo $var1.
In order to get input from the pipe, try:
./File2.sh < pipe

This is how I solved it.
The first mistake I made was to declare the pipe outside the programs. I was expecting there to be some special way in which a program accepts input parameters of type "pipe", which, as far as I've figured out, is wrong.
What you need to do is declare the pipe inside the program. So, in the program that reads from the user, you do the following.
For File1.sh:
pipe1 = /Documents
mkfifo pipe1
cat > pipe1
This will send the input read from the user into the pipe.
Now, while the pipe is open, it will keep accepting input. You can read from the pipe only while it's open, so you need to open a second terminal window to run the second program.
For File2.sh:
while("$input" != "exit")
do
read -r input < pipe1
echo "$input"
done
So whatever string you input in the first terminal window will be reflected in the second terminal window until "exit" is detected.

Related

bash "echo" including ">" in the middle creating file - please explain

When I write:
echo 2*3>5 is a valid inequality
In my bash terminal, a new file named 5 is created in my directory which contains:
2*3 is a valid inequality
I want to know exactly what is going on here and why I am getting this output.
I believe it's obvious that I'm new to Linux!
Thanks
In bash, redirections can occur anywhere in the line (but you shouldn't do that; see the bash-hackers tutorial). Bash takes the >5 as a redirection, creates the output file 5, and then processes the rest of the arguments. Therefore, echo 2*3 is a valid inequality is what actually runs, which gives you the output you see in the output file 5.
What you probably want is
echo "2*3>5 is a valid inequality"
or
echo '2*3>5 is a valid inequality'
(with single-quotes), either of which will give you the message you specify as a printout on the command line. The difference is that, within "", variables (such as $foo) will be filled in, but not within ''.
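For instance (a quick illustration; foo is just a placeholder variable name):
$ foo=6
$ echo "2*3>$foo is a valid inequality"
2*3>6 is a valid inequality
$ echo '2*3>$foo is a valid inequality'
2*3>$foo is a valid inequality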
Edit: The bash man page says that the redirection operators may precede or appear anywhere within a simple command or may follow a command, and that redirections are processed in the order they appear, from left to right.
bash does the output redirection first, i.e. >5 is processed first and a file named 5 is created (or truncated if it already exists). The resulting file descriptor remains open for the runtime of the echo command.
Then the remaining portion, 2*3 is a valid inequality, becomes the arguments to echo, and echo's standard output ends up in the (already-open) file 5.
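You can watch this left-to-right processing by moving the redirection around; all three of these put the same line into the file 5 (a quick demo, assuming no filename in the directory matches the glob 2*3):
echo 2*3 is a valid inequality >5
echo 2*3 >5 is a valid inequality
>5 echo 2*3 is a valid inequality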
To get the whole string as the output, use single or double quotes:
echo '2*3>5 is a valid inequality'
This is an example of output redirection. You're instructing the echo statement to, instead of writing to standard out, write to a filename. That filename happens to be "5".
You can avoid that behavior by quoting:
echo "2*3>5 is a valid inequality"

properly using IO redirection to append user input to a file in linux scripting?

I'm just starting to learn Linux scripting, and with it, user output/input. One of the things I need to learn, and keep trying to do to no avail, is appending user input to an output file. Something like
read text > text.dat
or
read text
$text > text.dat
Typically this ends up in failure, or in the creation of a text.dat which is empty no matter what is typed in by the user. What am I missing?
The read command, as documented in its manual page, will take a line of user input and assign it to a variable which you can name as an argument. It will also split the user input and assign it to multiple variables if you pass more than one name. It does all this without printing any kind of confirmation to standard out. We also know that the > operator redirects the standard out of a command to a file. It is also important to note that, unless bash is explicitly told that a line contains multiple commands (by using a semicolon or similar), it will assume it is all one command with multiple arguments.
So let's have a look at your examples and see what is happening:
read text > text.dat
This will run the read command, which will silently assign the user input to a variable called $text. It will then redirect the output of the command (nothing, as it is silent) to a file called text.dat. End result: an empty text.dat and an unused $text variable.
read text $text > text.dat
Bash will parse this command and first attempt to get the value assigned to the $text variable; at this point it is undefined, and so it will be ignored. It will then run the read command, which silently assigns the user input to a variable called $text, and redirect the output of the command (nothing, as it is silent) to a file called text.dat. End result: an empty text.dat and an unused $text variable.
So how can we resolve this? The first command is fine: we use read text to allow the user to input a line and have that line assigned to a variable called $text. Then we need a way to send that variable to standard out so we can redirect it. To do that, we can use the echo command, whose output we can redirect.
So for example:
read text
echo $text > text.dat
Another thing to note is that the > operator will overwrite the file; to append to it, use the >> operator.
So to take a user input and append it to a file we have:
read text
echo $text >> text.dat
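Following the quoting advice that usually goes with this, a slightly more defensive version of the same idea might be (a sketch, not the only way to write it):
#!/bin/sh
# -r stops read from treating backslashes specially
read -r text
# quoting the variable avoids word splitting and globbing; >> appends
printf '%s\n' "$text" >> text.dat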

arbitrary input from stdin to shell

So I have this existing command that accepts a single argument, but I need something that accepts the argument over stdin instead.
A shell script wrapper like the following works, but as I will be allowing untrusted users to pass arbitrary strings on stdin, I'm wondering if there's potential for someone to execute arbitrary commands on the shell.
#!/bin/sh
$CMD "`cat`"
Obviously if $CMD has a vulnerability in the way it processes the argument there's nothing I can do, so I'm concerned about stuff like this:
Somehow allow the user to escape the double quotes and pass input into argument #2 of $CMD
Somehow cause another arbitrary command to run
The parameter looks fine to me, but the command might be a bit shaky if it can have a space in it. Also, if you're looking to get just one line from the user, then you might prefer this:
#!/bin/bash
read line
exec "$CMD" "$line"
A lot of code would be broken if "$(cmd)" could expand to multiple words.
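To see the difference the quotes make, here is a small test; countargs is a hypothetical helper defined inline:
#!/bin/bash
# report how many arguments the function received
countargs () { echo "$# argument(s)"; }

countargs "$(echo one two three)"   # prints: 1 argument(s)
countargs $(echo one two three)     # prints: 3 argument(s)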

How to automatically pipe to less if the result is more than a page on my shell?

Mostly, I will not use | less for each and every command I run from the shell.
I pipe to less only when I actually run the command without it and find out that the output does not fit on the page. That costs me two runs of the same shell command.
Is there a way so that every time a command result is more than a display page, it automatically gets piped to less?
Pipe it to less -F aka --quit-if-one-screen:
Causes less to automatically exit if the entire file can be displayed on the first screen.
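For example (a quick illustration; whether the second command pages depends on your screen size):
echo hello | less -F     # fits on one screen: less exits immediately, like cat
ls -lR /etc | less -F    # long output: less pages as usual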
The most significant problem with trying to do that is how to get it to turn off when running programs that need a tty.
What I would recommend is that, for programs and utilities you frequently use, create shell functions that wrap them and pipe to less -F. In some cases, you can name the function the same as the program and it will take precedence, but can be overridden.
Here is an example wrapper function which would need testing and perhaps some additional code to handle edge cases, etc.
#!/bin/bash
foo () {
    if [[ -p /dev/stdout ]]   # you don't want to pipe to less if you're piping to something else
    then
        command foo "$@"
    else
        command foo "$@" | less -F
    fi
}
If you use the same name as I have in the example, it could break things that expect different behavior. To override the function and run the underlying program directly, precede it with command:
command foo
will run foo without using the function of the same name.
You could always pipe to less -E (this causes less to automatically quit at the end of the file). For commands with short output it would do what you want. I don't think you can arrange to pipe to less only when there is a lot of output.
In general, automatically piping to less requires the shell to be prescient about the output that will be produced by the commands it runs - and it is hard enough for humans to predict that without trying to make programs do so.
You could write a shell that does it for you - that captures the output (but what about stderr?) and paginates if necessary, but it would most certainly not be a standard shell.
I wrote this wrapper function and put it in my .profile. You can use it before a command, and the output will automatically be piped to less if it is longer than one page.
lcmd ()
{
    echo "$("$@")" | less -F
}
So 'lcmd ls' would ls the current directory and pipe that output to less.

How does my Perl program get standard input on Linux?

I am fairly new to Perl programming, but I have a fair amount of experience with Linux. Let’s say I have the following code:
while (1) {
    my $text  = <STDIN>;
    my $text1 = <STDIN>;
    my $text2 = <STDIN>;
}
Now, the main question is: Does STDIN in Perl read directly from /dev/stdin on a Linux machine or do I have to pipe /dev/stdin to the Perl script?
If you don't feed anything to the script, it will sit there waiting for you to enter something. When you do, it will be put into $text and then the script will continue to wait for you to enter something. When you do, that will go into $text1. Subsequently, the script will once again wait for you to enter something. Once that is done, the input will go into $text2. Then, the whole thing will repeat indefinitely.
If you invoke the script as
$ script < input
where input is a file, the script will read lines from the file similar to above, then, when the stream runs out, will start assigning undef to each variable for an infinite period of time.
AFAIK, there is no programming language where reading from the predefined STDIN (or stdin) file handle requires you to invoke your program as:
$ script < /dev/stdin
It reads directly from the STDIN file descriptor. If you run that script it will just wait for input; if you pipe data to it, it will loop until all the data is consumed and then wait forever.
You may want to change that to:
while (my $text = <STDIN>) {
    # blah de blah
}
so an EOF will terminate your program.
Perl's STDIN is, by default, just hooked up to whatever the standard input file descriptor is. Beyond that, Perl doesn't really care how or where the data came from. It's the same to Perl if you're reading the output from a pipe, redirecting a file, or typing interactively at the terminal.
If you care about each of those situations and you want to handle each differently, then you might try different approaches.
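For example, if the loop above is saved as script.pl (a hypothetical name), each of these feeds it the same standard input:
$ perl script.pl                   # interactive: reads whatever you type
$ perl script.pl < input.txt       # redirection: reads lines from input.txt
$ cat input.txt | perl script.pl   # pipe: the same lines, via another process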
