"read" command not executing in "while read line" loop [duplicate] - linux

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help with this one; I looked up the issue on Google, but I can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and that looks something like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines whether it's required or not, and the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line and ask the user for a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user for a value for every argument... But:
1) The read command seems not to execute. It just gets skipped, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module launches (and crashes because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.

As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not from the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
    ...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
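To see why this matters, here's a minimal sketch with a hypothetical filename containing a space:
file="my file.txt"
ls $file      # word-split: ls receives two arguments, "my" and "file.txt"
ls "$file"    # quoted: ls receives the single argument "my file.txt"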
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, then entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
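For example, a guarded cd might look like this sketch (the message text is illustrative):
cd "modules/$name" || {
    echo "Error: cannot cd to modules/$name" >&2
    exit 1
}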
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}
if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system.XXXXXXXX)" || {
    echo "Error creating temp directory" >&2
    exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
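Putting the fd-3 redirection and the IFS trick together (and collecting the answers in an array, as described next), the loop might look something like this sketch:
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer    # reads from stdin (the user), not from fd 3
    arglist+=("$arg=$answer")
done 3< "modules/$name/args.conf"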
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.


How to understand and avoid non-interactive mode errors when running ispell from script?

Background
Ispell is a basic command-line spelling program in Linux, which I want to call for a previously collected list of file names. These file names are recursively collected from a LaTeX root file, for example. This is useful when you want to spell-check all recursively included LaTeX files, and no other files. However, calling ispell from the command line turns out to be non-trivial, as ispell gives errors of the form
"Can't deal with non-interactive use yet." in some cases.
(As a side note, ideally I would like to call ispell programmatically from Java using the ProcessBuilder class, and without requiring bash. The same error seems to pester this approach, however.)
Question
Why is it that ispell gives the error "Can't deal with non-interactive use yet." in certain cases, when called in bash from a loop involving the read command, but not in other cases, as shown in the code example below?
The below minimal code example creates two small files
(testFileOne.txt, testFileTwo.txt) and a file containing the paths of the two created files (testFilesListTemp.txt).
Next, ispell is called for testFilesListTemp.txt in three different ways:
1. With the help of "cat".
2. By first collecting the names as a string, then looping over the substrings in the collected string, and calling ispell for each of them.
3. By looping over the contents of testFilesListTemp.txt directly, and calling ispell for the extracted paths.
For some reason the third method does not work, and yields the error "Can't deal with non-interactive use yet." Why exactly does this error occur, how can it be prevented, and/or is there perhaps another variation of the third approach that would work without errors?
#!/bin/bash
#ispell ./testFiles/ispellTestFile1.txt
# Creating two small files and a file with file paths for testing
printf "file 1 contents" > testFileOne.txt
printf "file 2 contents. With a spelling eeeeror." > testFileTwo.txt
printf "./testFileOne.txt\n./testFileTwo.txt\n" > testFilesListTemp.txt
COLLECTED_LATEX_FILE_NAMES_FILE=testFilesListTemp.txt
# Approach 1: produce list of file names with cat and
# pass as argument to ispell
# WORKS
ispell $(cat $COLLECTED_LATEX_FILE_NAMES_FILE)
# Second approach, first collecting file names as long string,
# then looping over substrings and calling ispell for each one of them
FILES=""
while read p; do
    echo "read file $p"
    FILES="$FILES $p"
done < $COLLECTED_LATEX_FILE_NAMES_FILE
printf "files list: $FILES\n"
for latexName in $FILES; do
    echo "filename: $latexName"
    ispell $latexName
done
# Third approach, not working.
# ispell complains in this case about not working in non-interactive mode:
# "Can't deal with non-interactive use yet."
while read p; do
    ispell "$p"
done < $COLLECTED_LATEX_FILE_NAMES_FILE
The third example does not work because you redirect standard input. ispell needs a terminal and user interaction. When you write code like this:
while read p; do
    ispell "$p"
done < $COLLECTED_LATEX_FILE_NAMES_FILE
everything that is read from standard input by any program within the loop will be taken from the $COLLECTED_LATEX_FILE_NAMES_FILE file. ispell detects that and refuses to operate. However, you can use file descriptor redirection to make read p read from the file, and ispell "$p" read from the "real" terminal. Just do:
exec 3<&0
while read p; do
    ispell "$p" 0<&3
done < $COLLECTED_LATEX_FILE_NAMES_FILE
exec 3<&0 "copies" (saves) your standard input (0, the "terminal") to descriptor 3. And later on you redirect standard input (0) to ispell from that descriptor, by typing 0<&3 (you can omit 0 if you like).

bash: How can I assemble the string: `"filename=output_0.csv"`

I am using a bash script to execute a program. The program must take the following argument. (The program is gnuplot.)
gnuplot -e "filename='output_0.csv'" 'plot.p'
I need to be able to assemble the following string: "filename='output_0.csv'"
My plan is to assemble the string STRING=filename='output_0.csv' and then do the following: gnuplot -e "$STRING" 'plot.p'.
I'm not particularly proficient at bash, so I have no idea how to do this.
I think strings can be concatenated by using STRING="$STRING"stuff to append to a string? I think that may be what's required?
As an extra layer of complication, the value 0 is actually an integer which should increment by 1 each time the program is run (done by a for loop). If I have n=1 in my program, how can I replace the 0 in the string with the text version of the integer n?
The safest way to append something to an existing string is to use curly braces and quotes:
STRING="something"
STRING="${STRING}else"
You can create the "dynamic" portion of your command line with something like this:
somevalue=0
STRING="filename='output_${somevalue}.csv'"
There are other tools like printf which can handle more complex formatting.
somevalue=1
fmt="filename='output_%s.csv'"
STRING="$(printf "$fmt" "$somevalue")"
Regarding your "extra layer of complication", I gather that this increment has to happen in such a way as to store the value somewhere outside the program, or you'd be able to use a for loop to handle things. You can use temporary files for this:
#!/usr/bin/env bash
# Specify our counter file
counter=/tmp/my_counter
# If it doesn't exist, "prime" it with zero
if [ ! -f "$counter" ]; then
    echo "0" > "$counter"
fi
# And if it STILL doesn't exist, fail.
if [ ! -f "$counter" ]; then
    echo "ERROR: can't create counter." >&2
    exit 1
fi
# Read the last value...
read value < "$counter"
# and set up our string, per your question.
STRING="$(printf "filename='output_%d.csv'" "${value}")"
# Last, run your command, and if it succeeds, update the stored counter.
gnuplot -e "$STRING" 'plot.p' && echo "$((value + 1))" > "$counter"
As always, there's more than one way to solve this problem. With luck, this will give you a head start on your reading of the bash man page and other StackOverflow questions which will help you learn what you need!
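For completeness: if all the runs happen inside a single script, you don't need the counter file at all; a plain for loop over the index is enough (a sketch, assuming plot.p takes the filename variable as above):
for n in 0 1 2; do
    gnuplot -e "filename='output_${n}.csv'" 'plot.p'
done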
An answer was posted, which I thought I had accepted already, but for some reason it has been deleted, possibly because it didn't quite answer the question.
I posted another similar question, and the answer to that helped me also answer this question. You can find said question and answer here: bash: Execute a string as a command

Read filename with * shell bash

I'm new to Linux and I want to write a bash script that can read the name of a file in a directory, where the name starts with LED + some numbers (e.g. LED5.5.002).
In that directory there is only one file that starts with LED. The problem is that this file will be updated every time, so the next time it will be, for example, LED6.5.012, and counting.
I searched and tried a little bit and came to this solution:
export fspec=/home/led/LED*
LedV=`basename $fspec`
echo $LedV
If I enter those commands one by one in my terminal it works fine, LedV=LED5.5.002, but if I run them in a bash script the result is LedV=LED*.
I searched for another solution:
a=/home/led/LED*
LedV=$(basename $a)
echo $LedV
but here again the same: entered one by one it's OK, but in a script, LedV=LED*.
It's probably something small, but because of my lack of knowledge of Linux I cannot find it. So can someone tell me what is wrong?
Thanks! Jan
Shell expansions don't happen on scalar assignments, so in
varname=foo*
the expansion of "$varname" will literally be "foo*". It's more confusing when you consider that echo $varname (or in your case basename $varname; either way without the double quotes) will cause the expansion itself to be treated as a glob, so you may well think the variable contains all those filenames.
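A quick demonstration of the difference, assuming a directory containing just a.txt and b.txt:
varname=*.txt
echo "$varname"    # prints: *.txt  (no expansion on assignment or in quotes)
echo $varname      # prints: a.txt b.txt  (the unquoted expansion is globbed here)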
Array expansions are another story. You might just want
fspec=( /path/LED* )
echo "${fspec[0]##*/}" # A parameter expansion to strip off the dirname
That will work fine for bash. Since POSIX sh doesn't have arrays like this, I like to give an alternative approach:
for fspec in /path/LED*; do
    break
done
echo "${fspec##*/}"
$ pwd
/usr/local/src
$ ls -1 /usr/local/src/mysql*
/usr/local/src/mysql-cluster-gpl-7.3.4-linux-glibc2.5-x86_64.tar.gz
/usr/local/src/mysql-dump_test_all_dbs.sql
If you only have one file, you will only get one result:
MyFile=`ls -1 /home/led/LED*`

One liner to append a file into another file but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one liner that will only append to /some/other/file if /some/path/to/a/file has not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a /etc/ config file
Some other caveats: the text is going to be added in a block (>>). Consequently, it should be relatively straightforward to check whether the entire block has been added near the end of the file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
        else:
            appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you if the contents of "file1" are present in "file2". It cannot tell you how they got there, e.g. whether it's because you previously concatenated file1 onto the end of file2.
Is that all you need? If not update your question to clarify/explain.
Here's a technique to see if a file contains another file
contains_file_in_file() {
    local small=$1
    local big=$2
    awk -v RS="" '{small=$0; getline; exit !index($0, small)}' "$small" "$big"
}
if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
    sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: The OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good ole C preprocessor include-guard tactics:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1; else
(of course replacing <filename> with the name of the file) and at the end
fi
this way, you surround the script in each file with a test for something that's only true once.
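For a concrete sketch with a hypothetical fragment named myfuncs.sh, the guarded file would look something like:
# myfuncs.sh -- the body runs at most once per shell session
if [ -z "$__myfuncs_sh__" ]; then __myfuncs_sh__=1;
    # ...original contents of myfuncs.sh go here, e.g.:
    greet() { echo "hello from myfuncs"; }
fi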
Does this work for you?
sudo sh -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
I think it doesn't quite fit, since you wrote that you want to append something, but this technique might help.
When the appending all occurs in one script, then use a flag:
if [ -z "${appended_the_file}" ]; then
cat /some/path/to/a/file >> /some/other/file
appended_the_file="Yes I have done it except for permission/right issues"
fi
I would go on to write a function appendOnce() { ... } with the content above. If you really want an ugly one-liner (ugly: a pain for the eye and your colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
    for f in ~/.bashrc.d/*; do
        if [[ -f "$f" ]]; then
            source "$f"
        fi
    done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
    if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
        # The following will silently fail if the symlink already
        # exists. If you wanted to report that, you could add || echo...
        ln -s "$FRAGMENT_SKELETON/$1" "$HOME/.bashrc.d/$1" 2>/dev/null
    else
        echo "Not a valid fragment name: '$1'"
        exit 1
    fi
}
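Usage would then be something like this (the fragment name and skeleton path are hypothetical):
FRAGMENT_SKELETON=/usr/local/share/bashrc-fragments    # hypothetical skeleton location
add_fragment 50-aliases.sh    # symlinks the fragment into ~/.bashrc.d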
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straightforward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.

Reading the path of files as string in shell script

My Aim -->
The file listing produced by a command has to be read line by line and used as part of another command.
Description -->
A command in linux returns
archive/Crow.java
archive/Kaka.java
mypmdhook.sh
which is stored in the changed_files variable. I use the following while loop to read the files line by line and use each one as part of a pmd command:
while read each_file
do
    echo "Inside Loop -- $each_file"
done<$changed_files
I am new to writing shell scripts, but my assumption was that the lines would be separated in the loop and printed in each iteration; instead I get the following error --
mypmdhook.sh: 7: mypmdhook.sh: cannot open archive/Crow.java
archive/Kaka.java
mypmdhook.sh: No such file
Can you tell me how I can just get each value as a string, not as a file to be opened (and later use it inside a command)? By the way, the file does exist, which made me feel even more confused. I'd be happy with any kind of answer that helps me understand and resolve this issue.
Since you have data stored in a variable, use a "here string" instead of file redirection:
changed_files="archive/Crow.java
archive/Kaka.java
mypmdhook.sh"
while read each_file
do
    echo "Inside Loop -- $each_file"
done <<< "$changed_files"
Inside Loop -- archive/Crow.java
Inside Loop -- archive/Kaka.java
Inside Loop -- mypmdhook.sh
It is extremely important to quote "$changed_files" in order to preserve the newlines, so that the while-read loop works as you expect. A rule of thumb: always quote variables, unless you know exactly why you want to leave the quotes off.
What happens here is that the value of your variable $changed_files is substituted into your command, and you get something like
while read each_file
do
    echo "Inside Loop -- $each_file"
done < archive/Crow.java
archive/Kaka.java
mypmdhook.sh
then the shell tries to open the file for redirecting the input and obviously fails.
The point is that redirections (e.g. <, >, >>) in most cases expect filenames, but what you really need is to feed the contents of the variable to stdin. The most obvious way to do that is:
echo "$changed_files" | while read each_file; do echo "Inside Loop -- $each_file"; done
You can also use the for loop instead of while read:
for each_file in $changed_files; do echo "inside Loop -- $each_file"; done
I prefer using while read ... if there is a chance that some filename may contain spaces, but in most cases for ... in will work for you.
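To see the difference, consider a hypothetical file name containing a space:
changed_files='archive/My Crow.java'
for f in $changed_files; do echo "for saw: $f"; done                  # two iterations: "archive/My", "Crow.java"
while read -r f; do echo "read saw: $f"; done <<< "$changed_files"    # one iteration: the whole name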
Rather than storing command's output in a variable use while loop like this:
mycommand | while read -r each_file; do echo "Inside Loop -- $each_file"; done
If you're using BASH you can use process substitution:
while read -r each_file; do echo "Inside Loop -- $each_file"; done < <(mycommand)
By the way, your attempt done<$changed_files assumes that $changed_files holds the name of a file.
