How to check for piped data in perl script - linux

I am currently using these two if statements to decide if data is being piped in or is from a file:
pod2usage("$NAME: Requires at least one argument FILE.\n") if ((-t STDIN) && (#ARGV == 0));
pod2usage("$NAME: zero if input is from STDIN.\n") if (!(-t STDIN) && (#ARGV != 0));
This works fine when the Perl script is run interactively from the shell. For example, these work as expected:
$ perl_script <flags> filename
$ cat | perl_script <flags>
However, when the Perl script is called from a bash script or something like org-mode in Emacs, the script thinks data is being piped in and throws the pod2usage error even when files are given as arguments. Here is an example that causes this behavior:
#!/bin/bash
while read line
do
perl_script <flags> $line >> output_file
done < file_names.txt
I am guessing that this is happening because -t STDIN returns false when the script is run non-interactively, since STDIN is not attached to a terminal. Is there a way to make sure that I get the proper behavior whether the script is run interactively or called from a shell script?

Try this:
#!/bin/bash
TTY=`tty`
while read line
do
perl_script <flags> $line < $TTY >> output_file
done < file_names.txt

The outer script is supplying file_names.txt on STDIN to the loop, which gets picked up by any child process in the loop. Your logic (insofar as your inner script is concerned) is correct. However, the inner script is getting input in two ways: a file name on the command line, and the redirect of the file_names.txt file on STDIN supplied to the loop, which is not connected to a tty. That's one of my main complaints about looping in bash. The STDIN redirect is kind of a loose cannon and gets picked up unwittingly by the loop contents. That's why I like tcsh's foreach:
#!/bin/tcsh
foreach line ( `cat file_names.txt` )
perl_script <flags> $line >> output_file
end
You may be able to do something like this in bash too, but I'm not certain whether your inner script would pick up the parent script's handle on STDIN (because I'm fairly new to bash and have been using tcsh for decades). You can try it though and see if this works:
for line in `cat file_names.txt`; do perl_script <flags> $line >> output_file; done
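If you want to stay in bash, another standard workaround (not part of the answer above) is to feed the file list to the loop on a separate file descriptor, so the loop body's STDIN is left alone:
#!/bin/bash
while read -r line <&3
do
    perl_script <flags> "$line" >> output_file
done 3< file_names.txt
Here read takes its input from descriptor 3, so perl_script still sees the terminal (or whatever the outer script's STDIN is) rather than file_names.txt.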

Related

How to get cat output path as a variable in bash script

I'm using cat to create a new file via a shell script. It looks something like:
./script.sh > output.txt
How can I access output.txt as a variable in my script. I've tried $1 but that doesn't work.
The script looks something like:
#!/bin/sh
cat << EOF
echo "stuff"
EOF
Since there doesn't appear to be an OS-agnostic way to do this, is there a way I can pass the output file into the script as an argument and then save the cat results to a file inside the script?
So the command would look like: ./script.sh output.txt and I can access the output as $1. Is something like this possible?
The Literal Question: Determining Where Your Stdout Was Redirected To
When a user runs:
./yourscript >outfile
...they're telling their shell to open outfile for write, and connect it to the stdout of your script, before starting your script. Consequently, all the operations on the filename are already finished when your script is started, so the name isn't passed to the script directly.
On Linux (only), you can access the location to which your stdout was redirected before your script was started through procfs:
output_dest=$(readlink -f /dev/fd/1)
echo "My output is being written to $output_dest"
This is literally interrogating where your first file descriptor (which is stdout) is open to. Note that the results won't always be useful -- if your program is being piped into something else, for instance, it might be something like pipe:[12345].
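For instance, with the snippet above saved in yourscript (the /tmp path is just an example):
./yourscript > /tmp/out.txt
My output is being written to /tmp/out.txt
When stdout is a pipe instead, the reported destination is the pipe's pseudo-name rather than a usable path.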
If you care about portability or robustness, you should generally write your software in such a way that it doesn't need to know or care where its stdout is being directed.
The Best Practice: Redirecting Your Script's Stdout Yourself
Better practice, if you need an output filename that your script can access, is to accept that as an explicit argument:
#!/bin/sh
# ^^ note that that makes this a POSIX sh script, not a bash script
outfile=$1
exec >"$outfile" # all commands below here have their output written to outfile
cat <<EOF
This is written to $outfile
EOF
...and then directing the user to pass the filename as an argument:
./yourscript outfile
#!/bin/sh
outfile=$1
cat << EOF > "$outfile"
echo "stuff"
EOF
With
./script.sh output.txt
you write to the file output.txt.
Setting a default value, in case the user doesn't pass an argument, is left for a different question.
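(For reference, a common one-liner for that, with output.txt as a purely illustrative default:
outfile=${1:-output.txt}   # fall back to output.txt when no argument is given
)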

Execute command substitutions in input read from a file

In a shell script, how can I make the script execute command substitutions that appear in a string read from an input file?
Example 1 (script1.sh):
a="google.analytics.account.id=`read a`"
echo $a
Example 2 (script2.sh):
a=`head -1 input.txt`
echo $a
Sample input.txt
google.analytics.account.id=`read a`
If I run script1.sh the read command is working fine, but when I am running script2.sh, the read command is not executed, but is printed as part of the output.
So I want script2.sh to have the same output as script1.sh.
Your input.txt contents are effectively executed as a script here; only do this if you entirely trust those contents to run arbitrary commands on your machine. That said:
#!/usr/bin/env bash
# ^^^^- not /bin/sh; needed for $'' and $(<...) syntax.
# generate a random sigil that's unlikely to exist inside your input.txt
# maybe even sigil="EOF-$(uuidgen)" if you're guaranteed to have uuidgen.
sigil="EOF-025CAF93-9479-4EDE-97D9-483A3D5472F3"
# generate a shell script which includes your input file as a heredoc
script="cat <<$sigil"$'\n'"$(<input.txt)"$'\n'"$sigil"
# run that script
eval "$script"
In script1.sh the first line is evaluated, therefore the read a is executed and its result substituted into the string.
In script2.sh the first line is evaluated, therefore the resulting string from the execution of head is put into the variable a.
There is no re-evaluation done on the resulting string. If you add an evaluation with eval "$a", and the first line of input.txt were exactly the first line of script1.sh (as it stands, the a="..." part is missing), then you might get the same result. The heredoc approach, as CharlesDuffy suggested, seems more robust.

Print fifo's content in bash

I want to get a fifo's content and print it in a file, and I have this code:
path=$1 #path file get from script's input
if [ -p "$path" ];then #check if path is pipe
content = 'cat "$path"'
echo "$content" > output
exit 33
fi
My problem is that when the script executes the cat "$path" line it hangs, and the terminal just shows a blinking cursor.
I don't know how to solve this problem.
P.S. the FIFO isn't empty, and output is the file where I want to print the FIFO's content.
If the FIFO is not empty, and there are no longer any file descriptors writing to that FIFO, you'll get EOF in the cat command. From man 7 pipe:
If all file descriptors referring to the write end of a pipe have been
closed, then an attempt to read(2) from the pipe will see end-of-file
(read(2) will return 0).
Source: man7.org/linux/man-pages/man7/pipe.7.html
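A quick illustration of that behavior, using a throwaway FIFO path:
mkfifo /tmp/demo_fifo
echo hello > /tmp/demo_fifo &   # the writer blocks until a reader opens the FIFO
cat /tmp/demo_fifo              # prints hello, then sees EOF once the writer closes
rm /tmp/demo_fifo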
Your assignment statement is incorrect.
Whitespace around = is not permitted.
You're confusing single quotes with backquotes. However, you should use $(...) for command substitution anyway.
The correct assignment is
content=$(cat "$path")
or more efficiently in bash,
content=$(< "$path")

Read line output in a shell script

I want to run a program (when executed it produces log data) from a shell script and write the output into a text file. I failed to do so :/
$prog is the executed program -> socat /dev/ttyUSB0,b9600 STDOUT
$log/$FILE is just the path to a .txt file
I had a Perl script to do this:
use Time::Piece;   # provides localtime->strftime
open (S, "$prog |") || die "Cannot open $prog ($!)\n";   # read the program's output
open (R, ">>", "$log") || die "Cannot open logfile $log!\n";
while (<S>) {
    my $date = localtime->strftime('%d.%m.%Y;%H:%M:%S;');
    print R "$date$_";
}
I tried to do this in a shell script like this
#!/bin/sh
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
PROG=$SOCAT $DEV$BAUD STDOUT
exec 3<&0
exec 0<$PROG
while read -r line
do
DATE=`date +%d.%m.%Y;%H:%M:%S;`
echo $DATE$line >> $FILE
done
exec 0<&3
Doesn't work at all...
How do I read the output of that prog and pipe it into my text file using a shell script? What did I do wrong (if I didn't do everything wrong)?
Final code:
#!/bin/sh
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
CMD="$SOCAT $DEV$BAUD STDOUT"
$CMD |
while read -r line
do
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> $FILE
done
To read from a process, use process substitution
exec 0< <( $PROG )
/bin/sh doesn't support it, so use /bin/bash instead.
To assign several words to a variable, quote or backslash whitespace:
PROG="$SOCAT $DEV$BAUD STDOUT"
Semicolon is special in shell, quote it or backslash it:
DATE=$(date '+%d.%m.%Y;%H:%M:%S;')
Moreover, no exec's are needed:
while ...
...
done < <( $PROG )
You might even add > $FILE after done instead of adding each line separately to the file.
Original answer
You haven't shown the error messages — which would have been helpful.
Your problem, though, is probably this line:
DATE=`date +%d.%m.%Y;%H:%M:%S;`
where the semicolons mark the end of a command, and there likely isn't a command %H that does anything useful, etc.
You need quotes around the format argument to date, and I'd use single quotes for this job:
DATE=$(date +'%d.%m.%Y;%H:%M:%S;')
or even replace the two lines in the body of the loop with:
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> $FILE
The double quotes prevent a variety of problems.
That assumes you fix a bunch of other problems, such as the setting of the variables FILE and prog. Also, I'd probably use:
exec > $FILE
to initially zap the output file and then all subsequent standard output would go to that file, so the echo line becomes:
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line"
Amended answer
The question was originally missing lots of key information. It eventually got updated to include the complete code.
The problem I identified originally remains an issue, but you weren't running into it because the input redirection was not working. If you want the input to come from a process, use a pipe, or possibly process substitution. However, note that you have #!/bin/sh as your shebang line, and /bin/sh won't recognize process substitution; either change the shebang or use the pipe notation. Note that process substitution has advantages if the loop is setting variables that need to be accessed after the loop is complete.
$SOCAT $DEV$BAUD STDOUT |
while read -r line
do
…
done
or
while read -r line
do
…
done < <($SOCAT $DEV$BAUD STDOUT)
Note that your code contains the line:
PROG=$SOCAT $DEV$BAUD STDOUT
This runs the command identified by $DEV$BAUD with the argument STDOUT and the environment variable PROG set to the value of $SOCAT. That is not what you wanted.
You could use an array:
PROG=($SOCAT $DEV$BAUD STDOUT)
and then run:
"${PROG[#]}"
either in the pipe line:
"${PROG[#]}" |
while read -r line
do
…
done
or with process substitution:
while read -r line
do
…
done < <("${PROG[#]}")
Note that unless there is code after the final exec 0<&3, there was no particular virtue in the redirections involving file descriptor 3. You should also close 3 when you're done with it:
exec 0<&3 3>&-
The 'final' code includes the lines:
CMD="$SOCAT $DEV$BAUD STDOUT"
$CMD |
while read -r line
This works OK because there are no spaces in the arguments to the command. That's a common case, but beware of spaces in arguments and file paths.
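Putting those pieces together, a sketch of a space-safe version using the array and process substitution (paths taken from the question):
#!/bin/bash
FILE=/var/log/mylogfile.log
PROG=(/usr/bin/socat /dev/ttyUSB0,b9600 STDOUT)   # command and arguments as array elements
while read -r line
do
    echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line"
done < <("${PROG[@]}") > "$FILE"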

Use I/O redirection between two scripts without waiting for the first to finish

I have two scripts, let's say long.sh and simple.sh: one is very time consuming, the other is very simple. The output of the first script should be used as input of the second one.
As an example, the "long.sh" could be like this:
#!/bin/sh
for line in `cat LONGIFLE.dat` do;
# read line;
# do some complicated processing (time consuming);
echo $line
done;
And the simple one is:
#!/bin/sh
while read a; do
# simple processing;
echo $a + "other stuff"
done;
I want to pipeline the two scripts like this:
sh long.sh | sh simple.sh
Using pipelines, simple.sh has to wait for the end of the long script before it can start.
I would like to know whether, in the bash shell, it is possible to see the output of simple.sh line by line, so that I can see at runtime which line is being processed at that moment.
I would prefer not to merge the two scripts together, nor to call the simple.sh inside long.sh.
Thank you very much.
stdout is normally block-buffered when it goes to a pipe. You want line-buffering. Try
stdbuf -oL sh long.sh | sh simple.sh
Note that this loop
for line in `cat LONGIFLE.dat`; do # see where I put the semi-colon?
reads words from the file. If you only have one word per line, you're OK. Otherwise, to read by lines, use while IFS= read -r line; do ...; done < LONGFILE.dat
Always quote your variables (echo "$line") unless you know specifically when not to.
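Applying both points, long.sh rewritten to read whole lines with quoted variables might look like:
#!/bin/sh
while IFS= read -r line; do
    # do some complicated processing (time consuming)
    echo "$line"
done < LONGFILE.dat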
