Issue with reading file content using a Linux shell script

I want to read the contents of a file using a Linux shell script. The contents of file_list.txt are:
abc
def
ghi
And the script to read this is read_file_content.sh:
#!/bin/bash
for file in $(cat file_list.txt)
do
"processing "$file
done
When I run the command as ./read_file_content.sh, I get the following error:
./read_file_content.sh: line 6: processing abc: command not found
./read_file_content.sh: line 6: processing def: command not found
./read_file_content.sh: line 6: processing ghi: command not found
Why does this print 'command not found'?

You wrote "processing "$file without any command in front of it.
Bash will take this literally and try to execute it as a command.
To print the text on the screen, you can use echo or printf.
Echo example
echo "processing" "$file"
Printf example
printf "%s\n" "$file"
(This is the recommended way if you're going to process weird filenames that contain - and space characters. See Why is printf better than echo?)
Notice the quoting: it prevents problems with filenames that contain special characters such as asterisks and spaces.
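Putting that together, one way the corrected read_file_content.sh could look is the sketch below; it switches to a while read loop so that entries containing spaces are also handled:
#!/bin/bash
# Print a "processing" line for each entry in file_list.txt
while IFS= read -r file
do
    printf 'processing %s\n' "$file"
done < file_list.txt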

Related

How to read a line that contains non-string command inside a file via bash

I have a file called ".bashrc". I'm a beginner in bash, and what I'm trying to do is check whether the last two lines inside the file exist and are written correctly, for example:
if [ export PATH=/opt/ads2/arm-linux64/bin:$PATH ]
then
echo "found system variable lines"
else
echo "systemvariables do not exists, please insert it in .bashrc"
fi
However, this doesn't seem to be trivial, since the two lines in question are not plain string literals.
Thanks in advance
Use grep to find stuff in file contents.
# if file .bashrc contains the line exactly export PATH=....
if grep -Fxq 'export PATH=/opt/ads2/arm-linux64/bin:$PATH' .bashrc ; then
echo "found system variable lines"
else
echo "systemvariables do not exists, please insert it in .bashrc"
fi
Read man grep and decide whether you want the -F and -x options. Do research and learn regex; I recommend the regex crosswords available on the net. Also research the difference between single quoting and double quoting in the shell. Remember to check your scripts with http://shellcheck.net
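If the line is missing you probably also want to append it. A minimal sketch of that, assuming you really do want the script to modify the user's .bashrc:
if grep -Fxq 'export PATH=/opt/ads2/arm-linux64/bin:$PATH' .bashrc ; then
    echo "found system variable lines"
else
    echo "system variable line missing, appending it to .bashrc"
    # single quotes keep $PATH literal so it expands when .bashrc is sourced
    echo 'export PATH=/opt/ads2/arm-linux64/bin:$PATH' >> .bashrc
fi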

For loop in command line runs bash script reading from text file line by line

I have a bash script which asks for two arguments separated by a space. Now I would like to automate filling in those arguments by reading them from a text file. The text file contains a list of the argument combinations.
So I'm thinking of something like this on the command line:
for line in 'cat text.file' ; do script.sh ; done
Can this be done? What am I missing/doing wrong?
Thanks for the help.
A while loop is probably what you need. Put the space-separated strings in the file text.file:
cat text.file
bingo yankee
bravo delta
Then write the script in question like below.
#!/bin/bash
while read -r arg1 arg2
do
/path/to/your/script.sh "$arg1" "$arg2"
done<text.file
Don't use for to read files line by line
Try something like this:
#!/bin/bash
ARGS=
while IFS= read -r line; do
    ARGS="${ARGS} ${line}"
done < ./text.file
script.sh $ARGS   # intentionally left unquoted so the collected words become separate arguments
This appends each line to a single variable, which is then expanded unquoted so that all of the collected words become the arguments of one run of your script (rather than one run per line as in the answer above).
'cat text.file' (with single quotes) is just a string literal; $(cat text.file) would expand to the output of the command. The cat is unnecessary anyway, because bash can read a file with a redirection. Also note that quoting the expansion makes it a single argument, while leaving it unquoted splits it on spaces, tabs and newlines.
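A quick illustration of the difference, using the $(<file) form so no cat is needed (just a sketch; note that the unquoted form also undergoes glob expansion):
echo 'cat text.file'          # prints the literal string: cat text.file
echo "$(<text.file)"          # expands to the whole file contents, kept as one argument
for word in $(<text.file); do # unquoted: split on spaces, tabs and newlines
    echo "word: $word"
done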
The bash syntax to read a file line by line (though it will be slow for big files) is:
while IFS= read -r line; do ... "$line"; done < text.file
Setting IFS to empty for the read command preserves leading and trailing whitespace; the -r option stops read from interpreting backslashes.
Another way to read the whole file is content=$(<file); note the < inside the command substitution. Building on that, a creative way to read a file into an array, with each element a non-empty line:
read_to_array () {
    local oldsetf=${-//[^f]} oldifs=$IFS
    set -f
    IFS=$'\n' array_content=($(<"$1")) IFS=$oldifs
    [[ $oldsetf ]] || set +f
}
read_to_array "file"
for element in "${array_content[@]}"; do ...; done
oldsetf stores the current set -f / set +f setting
oldifs stores the current IFS
IFS=$'\n' splits on newlines (runs of newlines are treated as one)
set -f avoids glob expansion, in case a line contains a lone * for example
note the () around $(), which stores the result of the splitting in an array
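For the original question (two space-separated arguments per line), one possible way to use the function, assuming the script.sh and text.file names from the question:
read_to_array text.file
for element in "${array_content[@]}"; do
    # each element is one whole line such as "bingo yankee";
    # left unquoted on purpose so it splits into the two arguments
    # (be careful if a line could contain glob characters like *)
    ./script.sh $element
done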
If I were to create a solution that matches the letter of what you ask for (a for loop that parses lines from a file), I would iterate over the line numbers of the file (provided it isn't too large).
Assuming each line has two strings separated by a single space (to be used as positional parameters in your script):
file="$1"
f_count="$(wc -l < $file)"
for line in $(seq 1 $f_count)
do
script.sh $(head -n $line $file | tail -n1) && wait
done
You may have a much better time using sjsam's solution however.

Read line output in a shell script

I want to run a program (which produces log data when executed) from a shell script and write its output into a text file, but I haven't managed to get it working.
$prog is the program to execute: socat /dev/ttyUSB0,b9600 STDOUT
$log/$FILE is just the path to a .txt file
I had a Perl script to do this:
use Time::Piece;   # needed for localtime->strftime
open (S,$prog) ||die "Cannot open $prog ($!)\n";
open (R,">>","$log") ||die "Cannot open logfile $log!\n";
while (<S>) {
    my $date = localtime->strftime('%d.%m.%Y;%H:%M:%S;');
    print "$date$_";
}
I tried to do this in a shell script like this
#!/bin/sh
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
PROG=$SOCAT $DEV$BAUD STDOUT
exec 3<&0
exec 0<$PROG
while read -r line
do
DATE=`date +%d.%m.%Y;%H:%M:%S;`
echo $DATE$line >> $FILE
done
exec 0<&3
Doesn't work at all...
How do I read the output of that prog and pipe it into my text file using a shell script? What did I do wrong (if I didn't do everything wrong)?
Final code:
#!/bin/sh
FILE=/var/log/mylogfile.log
SOCAT=/usr/bin/socat
DEV=/dev/ttyUSB0
BAUD=,b9600
CMD="$SOCAT $DEV$BAUD STDOUT"
$CMD |
while read -r line
do
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> $FILE
done
To read from a process, use process substitution
exec 0< <( $PROG )
/bin/sh doesn't support it, so use /bin/bash instead.
To assign several words to a variable, quote or backslash whitespace:
PROG="$SOCAT $DEV$BAUD STDOUT"
Semicolon is special in shell, quote it or backslash it:
DATE=$(date '+%d.%m.%Y;%H:%M:%S;')
Moreover, no exec's are needed:
while ...
...
done < <( $PROG )
You might even add > $FILE after done instead of adding each line separately to the file.
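For instance, a minimal sketch of that variant, reusing CMD and FILE from the final code above:
$CMD |
while read -r line
do
    echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line"
done > "$FILE"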
Original answer
You haven't shown the error messages — which would have been helpful.
Your problem, though, is probably this line:
DATE=`date +%d.%m.%Y;%H:%M:%S;`
where the semicolons mark the end of a command, and there likely isn't a command %H that does anything useful, etc.
You need quotes around the format argument to date, and I'd use single quotes for this job:
DATE=$(date +'%d.%m.%Y;%H:%M:%S;')
or even replace the two lines in the body of the loop with:
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> $FILE
The double quotes prevent a variety of problems.
That assumes you fix a bunch of other problems, such as the setting of the variables FILE and prog. Also, I'd probably use:
exec > $FILE
to initially zap the output file and then all subsequent standard output would go to that file, so the echo line becomes:
echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line"
Amended answer
The question was originally missing lots of key information. It eventually got updated to include the complete code.
The problem I identified originally remains an issue, but you weren't running into it because the input redirection was not working. If you want the input to come from a process, use a pipe, or possibly process substitution. However, note that you have #!/bin/sh as your shebang line, and /bin/sh won't recognize process substitution; either change the shebang or use the pipe notation. Note that process substitution has advantages if the loop sets variables that need to be accessed after the loop is complete.
$SOCAT $DEV$BAUD STDOUT |
while read -r line
do
…
done
or
while read -r line
do
…
done < <($SOCAT $DEV$BAUD STDOUT)
Note that your code contains the line:
PROG=$SOCAT $DEV$BAUD STDOUT
This runs the command identified by $DEV$BAUD with the argument STDOUT and the environment variable PROG set to the value of $SOCAT. That is not what you wanted.
You could use an array:
PROG=($SOCAT $DEV$BAUD STDOUT)
and then run:
"${PROG[#]}"
either in the pipe line:
"${PROG[#]}" |
while read -r line
do
…
done
or with process substitution:
while read -r line
do
…
done < <("${PROG[#]}")
Note that unless there is code after the final exec 0<&3, there was no particular virtue in the redirections involving file descriptor 3. You should also close 3 when you're done with it:
exec 0<&3 3>&-
The 'final' code includes the lines:
CMD="$SOCAT $DEV$BAUD STDOUT"
$CMD |
while read -r line
This works OK because there are no spaces in the arguments to the command. That's a common case, but beware of spaces in arguments and file paths.
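If any of those arguments ever did contain spaces, the array form shown above carries over to the final code like this (a sketch; arrays need bash, so the shebang would have to be #!/bin/bash rather than #!/bin/sh):
CMD=("$SOCAT" "$DEV$BAUD" STDOUT)   # each element stays one argument, spaces and all
"${CMD[@]}" |
while read -r line
do
    echo "$(date +'%d.%m.%Y;%H:%M:%S;')$line" >> "$FILE"
done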

Reading from STDIN, performing commands, then Outputting to STDOUT in Bash

I need to:
Accept STDIN in my script from a pipe
save it to a temp file so that I don't modify the original source
perform operations on the temp file to generate some output
output to STDOUT
Here is my script:
#!/bin/bash
temp=$(cat)
sed 's/the/THE/g' <temp
echo "$temp"
Right now, I am just trying to get it to replace all occurrences of "the" with "THE".
Here is the sample text:
the quick brown fox jumped over the lazy
brown dog the quick
brown fox jumped
over
Here is my command line:
cat test.txt | ./hwscript >hwscriptout
"test.txt" contains the sample text, "hwscript" is the script, "hwscriptout" is the output
However, when I look at the output file, nothing has changed (all occurrences of "the" remain uncapitalized). When I run the sed command directly on the command line instead of in the script, it works. I also tried to use $(sed) instead of sed, but then the command returned an error:
"./hwscript: line 5: s/the/THE/g: no such file or directory"
I have tried to search for a solution but could not find one.
Help is appreciated, thank you.
save it to a temp file so that I don't modify the original source
Anything received via stdin is just a stream of data, disconnected from wherever it originated from: whatever you do with that stream has no effect whatsoever on its origin.
Thus, there is no need to involve a temporary file - simply modify stdin input as needed.
#!/bin/bash
sed 's/the/THE/g' # without a filename operand or pipe input, this will read from stdin
# Without an output redirection, the output will go to stdout.
As you can tell, in this simple case you may as well use the sed command directly, without creating a script.
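For example, a direct invocation equivalent to the original pipeline could be (using the test.txt and hwscriptout names from the question):
sed 's/the/THE/g' test.txt > hwscriptout
# or, fed through a pipe as in the original command line:
cat test.txt | sed 's/the/THE/g' > hwscriptout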
Use this:
temp=$(sed 's/the/THE/g' <<<"$temp")
or
temp=$(printf "%s" "$temp" | sed 's/the/THE/g')
You were telling sed to process a file named temp, not the contents of the variable $temp. You also weren't saving the result anywhere, so echo "$temp" simply printed the old value.
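Putting those fixes together, a corrected version of the script could look like this (a sketch that keeps the question's capture-then-print structure):
#!/bin/bash
temp=$(cat)                            # capture everything from stdin
temp=$(sed 's/the/THE/g' <<<"$temp")   # transform the copy, not a file named temp
echo "$temp"                           # write the result to stdout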
Here is a way to do it as you described it
#!/bin/sh
# Read the input and append to tmp file
while read -r LINE; do
    echo "${LINE}" >> yourtmpfile
done
# Edit the file in place (BSD/macOS sed syntax; with GNU sed, drop the empty '' argument)
sed -i '' 's/the/THE/g' yourtmpfile
# Output the result
cat yourtmpfile
rm yourtmpfile
And here is a simpler way without a tmp file
#!/bin/sh
# Read the input and output the line after sed
while read -r LINE; do
    echo "${LINE}" | sed 's/the/THE/g'
done

Looping through lines in a file in bash, without using stdin

I am foxed by the following situation.
I have a file list.txt that I want to run through line by line, in a loop, in bash. A typical line in list.txt has spaces in it. The problem is that the loop contains a "read" command. I want to write this loop in bash rather than something like perl. I can't do it :-(
Here's how I would usually write a loop to read from a file line by line:
while read p; do
echo $p
echo "Hit enter for the next one."
read x
done < list.txt
This doesn't work though, because of course "read x" will be reading from list.txt rather than the keyboard.
And this doesn't work either:
for i in `cat list.txt`; do
echo $i
echo "Hit enter for the next one."
read x
done
because the lines in list.txt have spaces in.
I have two proposed solutions, both of which stink:
1) I could edit list.txt, and globally replace all spaces with "THERE_SHOULD_BE_A_SPACE_HERE" . I could then use something like sed, within my loop, to replace THERE_SHOULD_BE_A_SPACE_HERE with a space and I'd be all set. I don't like this for the stupid reason that it will fail if any of the lines in list.txt contain the phrase THERE_SHOULD_BE_A_SPACE_HERE (so malicious users can mess me up).
2) I could use the while loop with stdin, and then in each iteration launch e.g. a new terminal, which would be unaffected by the goings-on involving stdin in the original shell. I tried this and did get it to work, but it was ugly: I want to wrap all this up in a shell script, and I don't want that script to be randomly opening new windows. What would be nice, and what might somehow be the answer to this question, is if I could figure out how to invoke a new shell in the command and feed commands to it without feeding stdin to it, but I can't get it to work. For example, this doesn't work and I don't really know why:
while read p; do
bash -c "echo $p; echo ""Press enter for the next one.""; read x;";
done < list.txt
This attempt seems to fail because "read x", despite being in a different shell somehow, is still seemingly reading from list.txt. But I feel like I might be close with this one -- who knows.
Help!
You must open the file on a different file descriptor:
while read p <&3; do
echo "$p"
echo 'Hit enter for the next one'
read x
done 3< list.txt
I would probably count the lines in the file and iterate over them using e.g. sed. It is also possible to read from stdin indefinitely by changing the while condition to while true; and stopping with Ctrl+C.
line=0 lines=$(sed -n '$=' in.file)
while [ $line -lt $lines ]
do
let line++
sed -n "${line}p" in.file
echo "Hit enter for the next ${line} of ${lines}."
read -s x
done
AWK is also a great tool for this. A simple way to iterate through the input would be:
awk '{ print $0; printf "%s", "Hit enter for the next"; getline < "-" }' file
As an alternative, you can read from stderr, which by default is connected to the tty as well. The following then also includes a test for that assumption:
(
tty -s <& 2|| exit 1
while read -r line; do
echo "$line"
echo 'Hit enter'
read x <& 2
done < file
)
