Why does read throw an error in bash but works fine? - linux

This bash script writes an array to a file and then reads the file back into a different array. (This is useful for array-based communication between scripts.) However, a strange, unreported error is trapped by the IFS line (line 12). Why?
#!/bin/bash
# eso-error-ic
trap 'echo Error trapped, with code $?, on line ${LINENO}' ERR
# write data to a file
arr=(0 abc) && printf "%s\n" "${arr[@]}" > eso.out
# read data from the file into an array
# throws an error!!
IFS=$'\n' read -d '' -a new_arr < eso.out
# but it worked...
echo ${new_arr[0]}
echo ${new_arr[1]}
Script output:
Error trapped, with code 1, on line 12
0
abc
What's missing is any sort of message displayed when the error is produced. All you get is the message from the trap but no message about what the error is.
In other words, the IFS/read line produces an error, which is trapped, but no error message is displayed and the line properly reads the file into an array variable. It works, reports no error, but an "error" is trapped.
If you comment out the trap line OR switch to the command/eval/cat approach to reading a file into an array (as suggested here), no error is trapped. Here is what the command/eval/cat line would look like for this script (to replace line 12):
IFS=$'\n' GLOBIGNORE='*' command eval 'new_arr=($(cat eso.out))'

The error comes from not receiving the delimiter that read was expecting. I get the same with
read -d x variable <<<"hello"
If I change the input to "hellox" the error disappears.
As mentioned by @Aserre, a detailed analysis is at our Unix & Linux sister site, and as pointed out by @CharlesDuffy a common workaround is
read variable || [[ $variable ]]
which is used even without -d to cope with files which might lack the final terminating newline.
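Applied to the script in the question, the workaround might look like this (a sketch: it keeps the ERR trap, but the || test absorbs read's non-zero exit status when the delimiter never arrives):
#!/bin/bash
trap 'echo Error trapped, with code $?, on line ${LINENO}' ERR
arr=(0 abc) && printf "%s\n" "${arr[@]}" > eso.out
# read exits non-zero because the '' delimiter (NUL) is never seen,
# but the array is still filled; testing that it is non-empty keeps
# the compound command's status at 0, so the ERR trap stays quiet.
IFS=$'\n' read -d '' -a new_arr < eso.out || [[ ${new_arr[*]} ]]
echo "${new_arr[0]}"
echo "${new_arr[1]}"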

Related

Getting "ambiguous redirect" error in my shell script [duplicate]

The following line in my Bash script
echo $AAAA" "$DDDD" "$MOL_TAG >> ${OUPUT_RESULTS}
gives me this error:
line 46: ${OUPUT_RESULTS}: ambiguous redirect
Why?
Bash can be pretty obtuse sometimes.
The following commands all return different error messages for basically the same error:
$ echo hello >
bash: syntax error near unexpected token `newline'
$ echo hello > ${NONEXISTENT}
bash: ${NONEXISTENT}: ambiguous redirect
$ echo hello > "${NONEXISTENT}"
bash: : No such file or directory
Adding quotes around the variable seems to be a good way to deal with the "ambiguous redirect" message: You tend to get a better message when you've made a typing mistake -- and when the error is due to spaces in the filename, using quotes is the fix.
Do you have a variable named OUPUT_RESULTS or is it the more likely OUTPUT_RESULTS?
michael@isolde:~/junk$ ABC=junk.txt
michael@isolde:~/junk$ echo "Booger" > $ABC
michael@isolde:~/junk$ echo "Booger" >> $ABB
bash: $ABB: ambiguous redirect
michael@isolde:~/junk$
Put quotes around your variable. If it happens to have spaces, it will give you "ambiguous redirect" as well. Also check your spelling:
echo $AAAA" "$DDDD" "$MOL_TAG >> "${OUPUT_RESULTS}"
Example of an ambiguous redirect:
$ var="file with spaces"
$ echo $AAAA" "$DDDD" "$MOL_TAG >> ${var}
bash: ${var}: ambiguous redirect
$ echo $AAAA" "$DDDD" "$MOL_TAG >> "${var}"
$ cat file\ with\ spaces
aaaa dddd mol_tag
I've recently found that blanks in the name of the redirect file will cause the "ambiguous redirect" message.
For example if you redirect to application$(date +%Y%m%d%k%M%S).log and you specify the wrong formatting characters, the redirect will fail before 10 AM for example. If however, you used application$(date +%Y%m%d%H%M%S).log it would succeed. This is because the %k format yields ' 9' for 9AM where %H yields '09' for 9AM.
echo $(date +%Y%m%d%k%M%S) gives 20140626 95138
echo $(date +%Y%m%d%H%M%S) gives 20140626095138
The erroneous date might give something like:
echo "a" > myapp20140626 95138.log
where the following is what would be desired:
echo "a" > myapp20140626095138.log
Does the path specified in ${OUPUT_RESULTS} contain any whitespace characters? If so, you may want to consider using ... >> "${OUPUT_RESULTS}" (using quotes).
(You may also want to consider renaming your variable to ${OUTPUT_RESULTS})
If your script's redirect contains a variable, and the script body defines that variable in a section enclosed by parentheses, you will get the "ambiguous redirect" error. Here's a reproducible example:
1. vim a.sh to create the script
2. edit the script to contain (logit="/home/ubuntu/test.log" && echo "a") >> ${logit}
3. chmod +x a.sh to make it executable
4. ./a.sh to run it
If you do this, you will get "/home/ubuntu/a.sh: line 1: $logit: ambiguous redirect". This is because
"Placing a list of commands between parentheses causes a subshell to be created, and each of the commands in list to be executed in that subshell, without removing non-exported variables. Since the list is executed in a subshell, variable assignments do not remain in effect after the subshell completes."
From Using parenthesis to group and expand expressions
To correct this, you can modify the script in step 2 to define the variable outside the parentheses: logit="/home/ubuntu/test.log" && (echo "a") >> $logit
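Written out as a full script, the corrected a.sh might look like this (a sketch of the same fix):
#!/bin/bash
# Define the variable in the current shell, not inside the subshell,
# so the redirection outside the parentheses can still see it.
logit="/home/ubuntu/test.log"
(echo "a") >> "$logit"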
I got this error when trying to use brace expansion to write output to multiple files.
for example: echo "text" > {f1,f2}.txt results in -bash: {f1,f2}.txt: ambiguous redirect
In this case, use tee to output to multiple files:
echo "text" | tee {f1,f2,...,fn}.txt 1>/dev/null
the 1>/dev/null will prevent the text from being written to stdout
If you want to append to the file(s) use tee -a
If you are here trying to debug this "ambiguous redirect" error with GitHub Actions, I highly suggest trying it this way:
echo "MY_VAR=foobar" >> $GITHUB_ENV
The behavior I experienced with $GITHUB_ENV is that it adds the variable to the pipeline environment variables, as my example shows with MY_VAR.
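If that still fails, one thing worth checking (an assumption, since the usual cause is GITHUB_ENV being empty when the step runs outside a real Actions environment) is to quote the variable, which at least turns the failure into a clearer message:
# With quotes, an empty GITHUB_ENV fails with "No such file or directory"
# instead of "ambiguous redirect", and it works unchanged when it is set.
echo "MY_VAR=foobar" >> "$GITHUB_ENV"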
I just had this error in a bash script. The issue was an accidental \ at the end of the previous line, which was causing the error.
One other thing that can cause "ambiguous redirect" is \t \n \r in the variable you are writing to.
Maybe not \n\r? But err on the side of caution
Try this
echo "a" > ${output_name//[$'\t\n\r']}
I got hit with this one while parsing HTML: tabs (\t) at the beginning of the line.
This might be the case too: if you have not correctly assigned the file path to the variable you are redirecting output to, bash will throw this error.
files=`ls`
out_file = /path/to/output_file.t
for i in `echo "$files"`;
do
    content=`cat $i`
    echo "${content} ${i}" >> ${out_file}
done
The out_file variable is not set up correctly, so keep an eye on this too. BTW, this code prints all the content and its filename on the console.
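For reference, a corrected version of that loop could look like this (a sketch; /path/to/output_file.t is kept as a placeholder path):
files=$(ls)
# No spaces around '=': "out_file = ..." would try to run 'out_file' as a
# command and leave the variable unset, which causes the ambiguous redirect.
out_file=/path/to/output_file.t
for i in $files; do
    content=$(cat "$i")
    echo "${content} ${i}" >> "${out_file}"
done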
If you are building the shell command as a string, for example inside a Python program that runs it with os.system, you must concatenate the variable into the command string with the + sign.
For example:
If you have two files and you do not want to hard-code the file names, but instead keep them in variables
x = "input.txt"
y = "output.txt"
then build the command as ('shell command within quotes' + ' ' + x + ' > ' + y).
It will work this way, especially if you are using this inside a Python program with the os.system command.
In my case, this was a helpful warning, because the target variable (not the file) was misspelled and did not exist.
echo "ja" >> $doesNotExist
resulting in
./howdy.sh: line 4: $doesNotExist: ambiguous redirect
In my case, if I specify the output file via an environment variable (e.g. $ENV_OF_LOG_FILE), I get the ambiguous redirect error.
But if I use a plain text file path (e.g. /path/to/log_file), then there is no error.

How do you append a string built with interpolation of vars and STDIN to a file?

Can someone fix this for me?
It should copy a version log file to a backup after moving to a repo directory.
Then it automatically appends a line, given as input, to the log file with some formatting.
That's it.
Assume existence of log file and test directory.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG |
VHENTRY="- **${LOGDATE}** | ${VHMSG}"
cat ${VHENTRY} >> versionlog.MD
shell output
virufac@box:~/Git/test$ ~/.logvh.sh
MSG > testing script
EOF
EOL]
EOL
e
E
CTRL + C to get out of being stuck reading lines of input
virufac@box:~/Git/test$ cat versionlog.MD
directly outputs the markdown
# Version Log
## version 0.0.1 established 01-22-2020
*Working Towards Working Mission 1 Demo in 0.1 *
- **01-22-2020** | discovered faker.Faker and deprecated old namelessgen
EOF
EOL]
EOL
e
E
I finally got it to save the damned input lines to the file instead of just echoing the command I wanted on the screen without executing it. But... why isn't it adding the lines built from the VHENTRY variable... and why does it sometimes stop reading after one line but this time not? You can see I was trying different things to tell it to stop reading the input.
After realizing that one thing I had done in the script was by accident, I tried to fix it and saw that the | at the end of the read command was seemingly the only reason the script saved anything to the file in the first place.
I would have done this in python3 if I had known this script wouldn't be the simplest thing I had ever done. Now I just have to know how you do it, after all the time spent on it, so that I can remember never to think a shell script will save time again.
Use printf to write a string to a file. cat tries to read from a file named in the argument list. And when the argument is - it means to read from standard input until EOF. So your script is hanging because it's waiting for you to type all the input.
Don't put quotes around the path when it starts with ~, as the quotes make it a literal instead of expanding to the home directory.
Get rid of | at the end of the read line. read doesn't write anything to stdout, so there's nothing to pipe to the following command.
There isn't really any need for the VHENTRY variable, you can do that formatting in the printf argument.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG
printf -- '- **%s** | %s\n' "${LOGDATE}" "$VHMSG" >> versionlog.MD

No Error from Script for Non-Existent File

I have a shell script that reads a text file and uses its content. So far so good. But now I'm trying to make the script exit if the file is not found. The script looks like this
#!/bin/bash
function errorcatcher() {
    errorcode=$?
    echo "ERROR CODE : ${errorcode}"
    exit ${errorcode}
}
trap errorcatcher ERR
MYFILE=$1
IFS='|'
while read line; do
    echo ${line}
done < ${MYFILE}
echo "Execution complete"
And I run the script as
sh myscript.sh /home/mydir/ABC.txt
and it works fine. But if I try this
sh myscript.sh /home/mydir/nonexisting.file
I get
myscript.sh: line 17: /home/mydir/nonexisting.file: No such file or directory
Execution complete
Function errorcatcher does not get invoked and instead of exiting with an error code, the execution continues and I get the line Execution complete even though the file in question doesn't exist. My guess is no error is generated here, so I added this line before reading the text file
ls ${MYFILE}
The errorcatcher gets invoked this time. But if I try
sh myscript.sh /home/mydir/ABC.tx
Instead of existing file ABC.txt, I pass its incomplete name ABC.tx and again, the errorcatcher function is not invoked and the script completes successfully (Execution complete gets echoed).
Could someone help me with this? I'm curious as to why errorcatcher doesn't get invoked
for a non existing file without ls
for incomplete file name (ABC.tx) with ls
Function errorcatcher does not get invoked …
Indeed, with an error in the redirection of a loop like
while read line; do
…
done < ${MYFILE}
the ERR trap is not invoked. You have discovered an undocumented exception in the implementation of the trap command, or, if you prefer, a bug.
You can evade that by adding an additional test of the redirection before the while, e.g. the line
<$MYFILE
on its own will invoke the error trap.
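Applied to the script above, the guard might look like this (a sketch showing only the lines around the loop):
# A failing redirection on its own line does invoke the ERR trap,
# so test the file before the while loop's redirection is reached.
< ${MYFILE}
while read line; do
    echo ${line}
done < ${MYFILE}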

How to capture linux command log into the file?

Let's say I have the below command.
STATE_NOT_C_COUNT=`mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" --eval "db.$MONGO_DATABASE.count({\"state\" : {"'"$ne"'":\"C\"},\"physicalTableName\":\"table_name\"},{nolock:true})" | tail -1`
When I run the above command, I get an exception like
exception: connect failed
I want to capture this exception into a file via the error function.
error(){
    if [ "$?" -ne "0" ]; then
        echo "$1" 2>&1 error_log
        exit 1
    fi
}
I'm using the above function like this:
error $STATE_NOT_C_COUNT
But I'm not able to capture the exception through the function in files.
What you are doing is terrible. Let the program that fails print its error messages to stderr, and ensure that stderr is pointed to the right thing. However, the major issue you are having is just lack of quotes. Try:
error "$STATE_NOT_C_COUNT"
The issue is that the command error $STATE_NOT_C_COUNT is subject to field splitting, so if $STATE_NOT_C_COUNT contains any whitespace it is split into arguments, and you are only writing the first one. Another alternative is to write echo "$@" in the function, but this will squash whitespace.
However, it cannot be stressed enough that this is a terrible approach, completely against the unix philosophy. The program should write its errors to stderr, and you should let them go there. Just make sure stderr is pointed where you want it. The only possible reason to capture stderr is if you want to write it to multiple locations, so you might pipe it to tee or to a syslogger, or some other message bus, but doing such a thing is questionable.
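If you do want the connection error in a file as well as the count in the variable, redirecting the command's own stderr is more direct than echoing the captured output back out. A sketch (the --eval expression is abbreviated here; error_log is the file your function writes to):
# mongo's error messages go straight to error_log, while the query result
# still flows through the pipeline into the variable as before.
STATE_NOT_C_COUNT=$(mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" \
    --eval "..." 2>>error_log | tail -1)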

How to show line number when executing bash script

I have a test script which has a lot of commands and generates lots of output. I use set -x or set -v and set -e, so the script stops when an error occurs.
Is there a method which can output the line number of the script before each line is executed?
Or output the line number before the command trace generated by set -x?
Or any method which can deal with my script line location problem would be a great help.
Thanks.
You mention that you're already using -x. The variable PS4 holds the prompt printed before the command line is echoed when the -x option is set; it defaults to + followed by a space.
You can change PS4 to emit the LINENO (The line number in the script or shell function currently executing).
For example, if your script reads:
$ cat script
foo=10
echo ${foo}
echo $((2 + 2))
Executing it thus would print line numbers:
$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4
http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:
export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
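For example, you might export it and then run the script with tracing enabled (a sketch; myscript.sh is a placeholder name):
export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
bash -x myscript.sh
# Trace lines then look something like:
# +(myscript.sh:3): some_function(): echo hello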
In Bash, $LINENO contains the line number of the script line currently executing.
If you need to know the line number where the function was called, try $BASH_LINENO. Note that this variable is an array.
For example:
#!/bin/bash
function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}
function foo() {
    log "$@"
}
foo "$@"
See here for details of Bash variables.
PS4 with a value using $LINENO is what you need.
E.g., the following script (myScript.sh):
#!/bin/bash -xv
PS4='${LINENO}: '
echo "Hello"
echo "World"
Output would be:
./myScript.sh
+echo Hello
3 : Hello
+echo World
4 : World
Workaround for shells without LINENO
In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.
Define a function
echo_line_no () {
    grep -n "$1" $0 | sed "s/echo_line_no//"
    # grep the line(s) containing input $1 with line numbers
    # replace the function name with nothing
} # echo_line_no
Use it with quotes like
echo_line_no "this is a simple comment with a line number"
Output is
16 "this is a simple comment with a line number"
if the number of this line in the source file is 16.
This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO.
Anything more to add?
Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?
Want to know more? Read reflections on debugging
Simple (but powerful) solution: place an echo around the code you think causes the problem and move the echo line by line until the message no longer appears on screen, because the script has stopped due to an earlier error.
Even more powerful solution: install bashdb, the bash debugger, and debug the script line by line.
If you're using $LINENO within a function, it will cache the first occurrence. Instead use ${BASH_LINENO[0]}
