How to get cat output path as a variable in bash script - linux

I'm using cat to create a new file via a shell script. It looks something like:
./script.sh > output.txt
How can I access output.txt as a variable in my script? I've tried $1, but that doesn't work.
The script looks something like:
#!/bin/sh
cat << EOF
echo "stuff"
EOF
Since there doesn't appear to be an OS-agnostic way to do this, is there a way I can pass the output file into the script as an argument and then save the cat results to a file inside the script?
So the command would look like: ./script.sh output.txt and I can access the output as $1. Is something like this possible?

The Literal Question: Determining Where Your Stdout Was Redirected To
When a user runs:
./yourscript >outfile
...they're telling their shell to open outfile for write, and connect it to the stdout of your script, before starting your script. Consequently, all the operations on the filename are already finished when your script is started, so the name isn't passed to the script directly.
On Linux (only), you can access the location to which your stdout was redirected before your script was started through procfs:
output_dest=$(readlink -f /dev/fd/1)
echo "My output is being written to $output_dest"
This is literally interrogating where your first file descriptor (which is stdout) is open to. Note that the result won't always be useful -- if your output is being piped into something else, for instance, it might be something like pipe:[12345].
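For instance, a minimal sketch that only trusts the readlink result when stdout really is a regular file (the check and the messages are illustrative additions, not part of the original approach):
#!/bin/sh
# Linux-only sketch: /dev/fd/1 is a symlink to whatever stdout points at;
# [ -f ... ] follows the link, so it's true only when stdout is a regular file
if [ -f /dev/fd/1 ]; then
    output_dest=$(readlink -f /dev/fd/1)
    echo "My output is being written to $output_dest" >&2   # report on stderr so it stays out of the file
else
    echo "stdout is a terminal or a pipe, not a regular file" >&2
fi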
If you care about portability or robustness, you should generally write your software in such a way that it doesn't need to know or care where its stdout is being directed.
The Best Practice: Redirecting Your Script's Stdout Yourself
Better practice, if you need an output filename that your script can access, is to accept that as an explicit argument:
#!/bin/sh
# ^^ note that that makes this a POSIX sh script, not a bash script
outfile=$1
exec >"$outfile" # all commands below here have their output written to outfile
cat <<EOF
This is written to $outfile
EOF
...and then directing the user to pass the filename as an argument:
./yourscript outfile

#!/bin/sh
outfile=$1
cat << EOF > "$outfile"
echo "stuff"
EOF
With
./script.sh output.txt
you write to the file output.txt.
Setting a default value, in case the user doesn't pass an argument, is left for a different question.
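That said, a minimal sketch using the POSIX ${1:-default} parameter expansion could look like this (the fallback name output.txt is an arbitrary choice):
#!/bin/sh
# fall back to output.txt when the user doesn't pass an argument
outfile=${1:-output.txt}
exec >"$outfile"   # everything below writes to $outfile
cat <<EOF
This is written to $outfile
EOF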

Related

Execute command substitutions in input read from a file

In a shell script, how do I make the script read and execute commands from a string in an input file?
Example 1 (script1.sh):
a="google.analytics.account.id=`read a`"
echo $a
Example 2 (script2.sh):
cat script2.sh
a=`head -1 input.txt`
echo $a
Sample input.txt
google.analytics.account.id=`read a`
If I run script1.sh the read command is working fine, but when I am running script2.sh, the read command is not executed, but is printed as part of the output.
So I want script2.sh to have the same output as script1.sh.
Your input.txt contents are effectively executed as a script here; only do this if you entirely trust those contents to run arbitrary commands on your machine. That said:
#!/usr/bin/env bash
# ^^^^- not /bin/sh; needed for $'' and $(<...) syntax.
# generate a random sigil that's unlikely to exist inside your input.txt
# maybe even sigil="EOF-$(uuidgen)" if you're guaranteed to have it.
sigil="EOF-025CAF93-9479-4EDE-97D9-483A3D5472F3"
# generate a shell script which includes your input file as a heredoc
script="cat <<$sigil"$'\n'"$(<input.txt)"$'\n'"$sigil"
# run that script
eval "$script"
In script1.sh the first line is evaluated by the shell, so the `read a` is executed and its result substituted into the string.
In script2.sh the first line is also evaluated, but there the result of running head is simply stored in the variable a.
There is no re-evaluation done on the resulting string. If you add an evaluation step with eval "$a", and the first line in input.txt is exactly like the first line of script1.sh (currently the a="..." part is missing), then you should get the same result. The heredoc, as CharlesDuffy suggested, seems more accurate.
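As a hedged sketch of that eval variant, working with input.txt exactly as shown in the question (its first line lacks the a="..." wrapper, so the sketch adds it before evaluating); only do this if you fully trust input.txt, since its text is executed as shell code:
#!/usr/bin/env bash
# read the first line literally, backquotes and all
line=$(head -1 input.txt)
# wrap it in an assignment and re-evaluate it, so the embedded `read a`
# actually runs, mirroring what script1.sh does
eval "a=\"$line\""
echo "$a"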

Print fifo's content in bash

I want to get a fifo's content and print it in a file, and I have this code:
path=$1 #path file get from script's input
if [ -p "$path" ];then #check if path is pipe
content = 'cat "$path"'
echo "$content" > output
exit 33
fi
My problem is that when the cat "$path" line executes, the script hangs and the terminal just shows a blinking underscore.
I don't know how to solve this problem.
P.S. The FIFO isn't empty, and output is the file where I want to print the FIFO's content.
If the FIFO is not empty, and there are no longer any file descriptors writing to that FIFO, you'll get EOF in the cat command. From man 7 pipe:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0).
Source: man7.org/linux/man-pages/man7/pipe.7.html
Your assignment statement is incorrect.
Whitespace around = is not permitted.
You're confusing single quotes with backquotes. However, you should use $(...) for command substitution anyway.
The correct assignment is
content=$(cat "$path")
or more efficiently in bash,
content=$(< "$path")
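Putting those fixes together, a minimal corrected sketch of the original script might be:
#!/usr/bin/env bash
path=$1                      # FIFO path passed as the script's first argument
if [ -p "$path" ]; then      # proceed only if it really is a named pipe
    content=$(cat "$path")   # waits until a writer has opened the FIFO and closed it
    printf '%s\n' "$content" > output
    exit 33
fi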

Read line output in a shell script

I want to run a program (when executed it produces logdata) out of a shell script and write the output into a text file. I failed to do so :/
$prog is the executed prog -> socat /dev/ttyUSB0,b9600 STDOUT
$log/$FILE is just path to a .txt file
I had a Perl script to do this:
open (S,$prog) ||die "Cannot open $prog ($!)\n";
open (R,">>","$log") ||die "Cannot open logfile $log!\n";
while (<S>) {
my $date = localtime->strftime('%d.%m.%Y;%H:%M:%S;');
print "$date$_";
}
I tried to do this in a shell script like this
#!/bin/sh
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
PROG=$SOCAT $DEV$BAUD STDOUT
exec 3<&0
exec 0<$PROG
while read -r line
do
DATE=`date +%d.%m.%Y;%H:%M:%S;`
echo $DATE$line >> $FILE
done
exec 0<&3
Doesn't work at all...
How do I read the output of that prog and pipe it into my text file using a shell script? What did I do wrong (if I didn't do everything wrong)?
Final code:
#!/bin/sh
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
CMD="$SOCAT $DEV$BAUD STDOUT"
$CMD |
while read -r line
do
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> $FILE
done
To read from a process, use process substitution
exec 0< <( $PROG )
/bin/sh doesn't support it, so use /bin/bash instead.
To assign several words to a variable, quote or backslash whitespace:
PROG="$SOCAT $DEV$BAUD STDOUT"
Semicolon is special in shell, quote it or backslash it:
DATE=$(date '+%d.%m.%Y;%H:%M:%S;')
Moreover, no exec's are needed:
while ...
...
done < <( $PROG )
You might even add > $FILE after done instead of adding each line separately to the file.
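Putting those suggestions together, a minimal sketch (bash, not /bin/sh, because of the process substitution) might look like:
#!/usr/bin/env bash
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
PROG="$SOCAT $DEV$BAUD STDOUT"   # whole command in one quoted string

# timestamp each line; the loop's combined output goes to $FILE
while read -r line
do
    echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line"
done < <( $PROG ) > "$FILE"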
Original answer
You haven't shown the error messages — which would have been helpful.
Your problem, though, is probably this line:
DATE=`date +%d.%m.%Y;%H:%M:%S;`
where the semicolons mark the end of a command, and there likely isn't a command %H that does anything useful, etc.
You need quotes around the format argument to date, and I'd use single quotes for this job:
DATE=$(date +'%d.%m.%Y;%H:%M:%S;')
or even replace the two lines in the body of the loop with:
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> $FILE
The double quotes prevent a variety of problems.
That assumes you fix a bunch of other problems, such as the setting of the variables FILE and prog. Also, I'd probably use:
exec > $FILE
to initially zap the output file and then all subsequent standard output would go to that file, so the echo line becomes:
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line"
Amended answer
The question was originally missing lots of key information. It eventually got updated to include the complete code.
The problem I identified originally remains an issue, but you weren't running into it because the input redirection was not working. If you want the input to come from a process, use a pipe, or possibly process substitution. However, note that you have #!/bin/sh as your shebang line, and /bin/sh won't recognize process substitution; either change the shebang or use the pipe notation. Note that process substitution has advantages if the loop is setting variables that need to be accessed after the loop is complete.
$SOCAT $DEV$BAUD STDOUT |
while read -r line
do
…
done
or
while read -r line
do
…
done < <($SOCAT $DEV$BAUD STDOUT)
Note that your code contains the line:
PROG=$SOCAT $DEV$BAUD STDOUT
This runs the command identified by $DEV$BAUD with the argument STDOUT and the environment variable PROG set to the value of $SOCAT. That is not what you wanted.
You could use an array:
PROG=($SOCAT $DEV$BAUD STDOUT)
and then run:
"${PROG[#]}"
either in the pipe line:
"${PROG[#]}" |
while read -r line
do
…
done
or with process substitution:
while read -r line
do
…
done < <("${PROG[@]}")
Note that unless there is code after the final exec 0<&3, there was no particular virtue in the redirections involving file descriptor 3. You should also close 3 when you're done with it:
exec 0<&3 3>&-
The 'final' code includes the lines:
CMD="$SOCAT $DEV$BAUD STDOUT"
$CMD |
while read -r line
This works OK because there are no spaces in the arguments to the command. That's a common case, but beware of spaces in arguments and file paths.

run cat command for all the files in the directory given as an argument to the script and output to the file name given as the second argument

I run the following code for concatenating files in a directory given as the argument for the script file in bash
for i in $*
do
cat $* > /home/christy/Documents/filetest/catted.txt
done
This produce the error
cat: /home/christy/Documents/filetest/catted.txt: input file is output file
I think there are at least 4 things wrong with your script....
Firstly, your loop will set the value of i to the name of each file in succession, so you would want to actually use i inside your loop, like this:
for i in $*
do
cat "$i" ....somewhere
done
Secondly, if you use the > redirection, each file will land exactly on top of the previous one, so you should really use the >> redirection, which will append the current file to the end of the previous one, like this
for i in $*
do
cat "$i" >> ...somewhere
done
Thirdly, I think you should use double-quoted "$@" to get all your command-line arguments, rather than plain $*
for i in "$@"
...
Fourthly, you can achieve the exact effect I think you want with this simpler command:
cat "$#" > /home/christy/Documents/filetest/catted.txt
You can't cat a file back onto itself. That's what "input file is output file" means. Because catted.txt shows up in your list of arguments to cat, it is going to try to cat to itself. So, move catted.txt to somewhere other than the source directory.
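For completeness, a minimal sketch of the two-argument form described in the title (the variable names are assumptions, not the asker's exact interface):
#!/usr/bin/env bash
dir=$1   # directory containing the files to concatenate
out=$2   # where the concatenated result should go
# keep "$out" outside "$dir", otherwise cat reports "input file is output file"
cat "$dir"/* > "$out"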

How to redirect output to a file and stdout

In bash, calling foo would display any output from that command on the stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects file descriptor 2 (stderr/standard error) into file descriptor 1 (stdout/standard output), so that both are written to stdout. The combined stream is then piped into tee, which writes it to the given output file as well as to its own standard output.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
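If unbuffer isn't installed, stdbuf from GNU coreutils can often get a similar effect by forcing line-buffered stdout (a sketch rather than a universal fix, since a program can still override its own buffering internally):
$ stdbuf -oL program [arguments...] 2>&1 | tee outfile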
Another way that works for me is,
<command> |& tee <outputFile>
as shown in gnu bash manual
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the bash manual's section on redirection.
You can basically use Zoredache's solution, but if you don't want to overwrite the output file, you should invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add...
The unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting aside those troubles, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process producing output logs. The simplest solution to display the logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written to the file named logfile.
You can do that for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while still letting everything go to stdout at the same time.
It relies on a few ugly tricks:
use exec without a command to set up the redirections,
use tee to duplicate the output,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, written with the $'string' special bash notation) to mark the script as restarted (no equivalent parameter can appear among your original arguments),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly, but useful to me in certain situations.
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would then happen as you.
Hint: tee also supports appending with the -a flag. If you need to replace a line in a file as another user, you could execute sed as the desired user.
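For instance (a hypothetical sketch: the user, the path, and the setting name are placeholders):
# replace a line in a file owned by another user, running sed as that user
sudo -u some_user sed -i 's/^old_setting=.*/old_setting=new_value/' /some/path/some_file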
< command > |& tee filename # this creates the file "filename" with the command's output as its content; if the file already exists, its existing content is replaced by the command's output.
< command > | tee >> filename # this appends the output to the file, but it doesn't print the command's output to standard output (the screen).
I want to print something using "echo" on screen and also append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but something like the following will also do the job:
ls -lr / > output && cat output
(unlike tee, though, this only shows the output after the command has finished writing the file).
