Linux "echo -n" not being flushed

I have the following code:
while ...
echo -n "some text"
done | while read; do
echo "$REPLY" >> file
done
but echo only works when used without the "-n" flag.
It looks like when -n is used, the output is never read by the next while loop.
How can I make sure that "some text" will be read even when it is not followed by an EOL?

You can't distinguish between
echo -n "some text"
and
echo -n "some t"
echo -n "ext"
so you need some kind of delimiting rule. Usually EOL plays that role. read supports a custom delimiter via -d, or can split on a fixed number of characters via -n or -N. For example, you can make read return after every single character:
echo -n qwe | while read -N 1 ch; do echo "$ch"; done

The workaround would be (following the original example):
while ...
echo -n "some text"
done | (cat && echo) | while read; do
echo "$REPLY" >> file
done
This appends an EOL to the text stream and allows read to consume it.
The side effect is an additional EOL at the end of the stream.
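Alternatively, the reading loop itself can be made to accept a final chunk that has no trailing EOL, by also testing whether REPLY is non-empty when read fails. A minimal sketch, with a finite producer standing in for the original while loop; note that the unterminated data is only delivered once the producer closes the stream, not incrementally:
# the producer emits "chunk 1 chunk 2 chunk 3 " with no final newline
for i in 1 2 3; do echo -n "chunk $i "; done |
while IFS= read -r || [[ -n $REPLY ]]; do
echo "$REPLY" >> file
done
Here read fails at end of input, but REPLY still holds the unterminated chunk, so the loop body runs one last time instead of silently dropping it.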

You can start by defining your own delimiter:
while :; do
echo -n "some text"
sleep 2
done | while read -d' ' reply; do
echo "-$reply-"
done
This prints:
-some-
-textsome-
-textsome-
For an email, for instance, it might make sense to use . as the delimiter, but you need to decide on some tokenization scheme.
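For instance, a small sketch of splitting on . (the sample input here is made up for illustration):
printf 'first sentence. second sentence.' | while IFS= read -r -d '.' part; do
echo "token:$part"
done
Each iteration receives the text up to (and excluding) the next dot; once no more dots remain, read returns non-zero and the loop ends.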

You can make read read one character at a time, but you should also set IFS= so that special characters (newlines, spaces) are captured as well.
To show that the characters really are captured, I will uppercase the replies.
i=0
while (( i++<5 )) ; do
echo -n "some text $i. "
sleep 1;
done | while IFS= read -rn1 reply; do
printf "%s" "${reply^^}"
done
This solution has one quirk: you will not see any newlines, because read returns an empty reply when it hits one.
If you want to see them too, you need to handle that case:
i=1
while (( i++<5 )) ; do
echo -n "some text $i.
second line."
sleep 1;
done | while IFS= read -rn1 reply; do
if (( ${#reply} == 0 )); then
echo
else
printf "%s" "${reply^^}"
fi
done

Related

pass line to function - overwrite using sed line by line - BASH

Why does this code not work? What is the problem with passing $line to a function?
function a {
echo $1 | grep $2
}
while read -r line; do
a $line "LAN"
done < database.txt
Another question: I have to overwrite a txt file line by line, possibly using the sed command. But not the whole line, only the part that needs to change. Something like this:
while read -r line; do
echo $line | sed "s/STRING1/STRING2/"
done < namefile
EDIT
Let me give an example for my second question.
input file:
LAN 1:
[text]11111[text]
[text]22222[text]
[text]33333[text]
LAN 2:
[text]11111[text]
[text]22222[text]
[text]33333[text]
output file:
LAN 1:
[text]44444[text]
[text]22222[text]
[text]33333[text]
LAN 2:
[text]11111[text]
[text]22222[text]
[text]33333[text]
I have to overwrite database.txt, so I thought I would do this line by line, using a counter for the LAN sections. This is my code:
while read -r line; do
echo "$line" | grep -q LAN
if [ $? = "0" ]; then
net_count=$((net_count+1))
fi
if [ $net_count = <lan choose before> ]; then # variable that contains lan number chosen by user
echo "$line" | fgrep -q "11111"
if [ "$?" = "0" ]; then
echo $line | sed "s/11111/44444/" > database.txt
break
fi
fi
done < database.txt
Thank you all
Running sed once for every line is typically over a thousand times slower than running just one sed instance processing your whole file of input. Don't do it.
If you want to do string manipulation on a line-by-line basis, use native bash primitives for the purpose, as documented in BashFAQ #100:
a() {
local line=$1 regex=$2
if [[ $line =~ $regex ]]; then
printf '%s\n' "$line"
fi
}
while IFS= read -r line; do
a "$line" LAN
done <database.txt
Likewise, for substring replacements, an appropriate parameter expansion primitive exists:
while read -r line; do
printf '%s\n' "${line//STRING1/STRING2}"
done < namefile
That said, those approaches are only appropriate if you need to iterate line-by-line. Typically, it makes more sense to use a single grep or sed operation, and iterate over the results of those calls if you need to do native bash operations. For instance, the following iterates over the output from grep, as emitted by a process substitution:
regex=LAN
while IFS= read -r line; do
echo "Read line from grep: $line"
done < <(grep -e "$regex" <database.txt)
You need to put $line in quotes for the whole line to be passed as a single parameter. Otherwise, every space character splits the string into multiple parameters to the function:
#!/bin/bash
function a {
echo "$1" | grep "$2"
}
while read -r line; do
a "$line" "LAN"
done < database.txt
And for the second question, if you want to print only the lines that you modify, you can use the following code:
while read -r line; do
echo "$line" | sed -n "s/STRING1/STRING2/p"
done < namefile
Here, -n suppresses sed's automatic printing of every line, and the p flag prints only the lines where a substitution was made.
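For the second question specifically (changing 11111 to 44444 only inside the LAN section the user picked), a single pass over the file is usually preferable to the bash loop, and it also avoids the problem in the original snippet of redirecting output into the same database.txt that is still being read. A hedged sketch using awk; the lan variable and the temporary-file handling are illustrative, not from the original post:
lan=1   # section chosen by the user (1 here, to reproduce the example output)
awk -v lan="$lan" '
/LAN/ { count++ }                        # count LAN headers as they appear
count == lan { sub(/11111/, "44444") }   # edit only inside the chosen section
{ print }
' database.txt > database.tmp && mv database.tmp database.txt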

Bash scripting: why is the last line missing from this file append?

I'm writing a bash script to read a set of files line by line and perform some edits. To begin with, I'm simply trying to move the files to backup locations and write them out as-is, to test the script is working. However, it is failing to copy the last line of each file. Here is the snippet:
while IFS= read -r line
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
I obviously want to preserve whitespace when I copy the files, which is why I have set the IFS to null. I can see from the output that the last line of each file is being read, but it never appears in the output.
I've also tried an alternative variation, which does print the last line, but adds a newline to it:
while IFS= read -r line || [ -n "$line" ]
do
echo "Line is ***$line***"
echo "$line" >> $POM
done < $POM.backup
What is the best way to do this read-write operation, so that the files are written exactly as they are, with the correct whitespace and no newlines added?
The command that is adding the line feed (LF) is not the read command, but the echo command. read does not return the line with the delimiter still attached to it; rather, it strips the delimiter off (that is, it strips it off if it was present in the line, in other words, if it just read a complete line).
So, to solve the problem, you have to use echo -n to avoid adding back the delimiter, but only when you have an incomplete line.
Secondly, I've found that when read is given a NAME (in your case line) and IFS is not cleared, it trims leading and trailing whitespace, which I don't think you want. But this can be avoided by not providing a NAME at all, and using the default return variable REPLY, which preserves all whitespace.
So, this should work:
#!/bin/bash
inFile=in;
outFile=out;
rm -f "$outFile";
rc=0;
while [[ $rc -eq 0 ]]; do
read -r;
rc=$?;
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
echo "$REPLY" >>"$outFile";
elif [[ -n "$REPLY" ]]; then ## incomplete line
echo "incomplete=\"$REPLY\"";
echo -n "$REPLY" >>"$outFile";
fi;
done <"$inFile";
exit 0;
Edit: Wow! Three excellent suggestions from Charles Duffy, here's an updated script:
#!/bin/bash
inFile=in;
outFile=out;
while { read -r; rc=$?; [[ $rc -eq 0 || -n "$REPLY" ]]; }; do
if [[ $rc -eq 0 ]]; then ## complete line
echo "complete=\"$REPLY\"";
printf '%s\n' "$REPLY" >&3;
else ## incomplete line
echo "incomplete=\"$REPLY\"";
printf '%s' "$REPLY" >&3;
fi;
done <"$inFile" 3>"$outFile";
exit 0;
After review, I wonder if:
{
line=
while IFS= read -r line
do
echo "$line"
line=
done
echo -n "$line"
} <$INFILE >$OUTFILE
is just not enough...
Here is my initial proposal:
#!/bin/bash
INFILE=$1
if [[ -z $INFILE ]]
then
echo "[ERROR] missing input file" >&2
exit 2
fi
OUTFILE=$INFILE.processed
# a way to know if last line is complete or not :
lastline=$(tail -n 1 "$INFILE" | wc -l)
if [[ $lastline == 0 ]]
then
echo "[WARNING] last line is incomplete -" >&2
fi
# add a newline ANYWAY; if the last line was already complete, the end of file will just be seen as an extra empty line.
echo | cat $INFILE - | {
first=1
while IFS= read -r line
do
if [[ $first == 1 ]]
then
echo "First Line is ***$line***" >&2
first=0
else
echo "Next Line is ***$line***" >&2
echo
fi
echo -n "$line"
done
} > $OUTFILE
if diff $OUTFILE $INFILE
then
echo "[OK]"
exit 0
else
echo "[KO] processed file differs from input"
exit 1
fi
The idea is to always add a newline at the end of the file and to print newlines only BETWEEN the lines that are read.
This should work for nearly all text files, as long as they do not contain a NUL byte (\0), which would be lost.
The initial test can be used to decide whether an incomplete text file is acceptable or not.
Add the newline back only when the line that was read was complete. Since read strips the delimiter, you cannot test the line itself for a trailing \n; check read's return code instead. Like this:
while { IFS= read -r line; rc=$?; [[ $rc -eq 0 || -n $line ]]; }; do
echo "Line is ***$line***"
printf '%s' "$line" >&3
if [[ $rc -eq 0 ]]; then
printf '\n' >&3 # the line was complete, so restore the newline read stripped
fi
done < "$POM.backup" 3>"$POM"

Trying out my first BASH script, keep getting unexpected end of file

I was trying to make a script that would pretty much automate this process:
http://knowledgelayer.softlayer.com/procedure/add-additional-ips-redhat
Wasn't too sure how well it would work, but I didn't get too far before I couldn't get the script to run. Below is the content of the script:
Editing with updated code:
Edit#2: Got it mostly working, however now it runs the loop and skips over the read prompt to get the static IP.
#!/bin/bash
path=/etc/sysconfig/network-scripts/
echo "Let's get your IP added :)"
echo""
getnewip()
{
echo read -p "Enter the new IP Address you wish to add: " staticip
}
getserverinfo()
{
gateway=$(netstat -rn | sed -n 3p | awk '{print $2}')
netmask=$(ifconfig eth1 | grep -i 'netmask' | grep -v '127.0.0.1' | awk '{print $4}')
clone=$( ifconfig | grep eth1 | awk '{print $1}' | cut -d: -f2 )
}
rangechecks()
{
cd /etc/sysconfig/network-scripts/
ls ifcfg-eth1-range*
filename==$1
if [[ ! -f $filename ]]; then
touch "$filename"
echo "Created \"$filename\""
fi
digit=1
while true; do
temp_name=$filename-$digit
if [[ ! -f temp_name ]]; then
touch "$temp_name"
echo "Created $path\"$temp_name\""
digit=$((digit + 1 ))
fi
done
}
writeinterfacefile()
{
cat >> "/etc/sysconfig/network-scripts/$1" << EOF
IPADDR_START=$staticip
IPADDR_END=$staticip
NETMASK=$netmask
CLONENUM_START=$((clone+1))
EOF
echo ""
echo "Your information was saved in the file '$1'."
echo ""
}
{
clear
getserverinfo
echo ""
echo "Please verify this information: "
echo "Gateway Address: " "$gateway"
echo "Netmask: " "$netmask"
echo "Your new IP: " "$staticip"
echo ''
while true; do
read -p "Is this information correct? [y/N]" yn
case $yn in
[Yy]* ) $writeinterfacefile;;
[Nn]* ) print "$getserverinfo" && exit ;;
* ) echo 'Please enter Y or n';;
esac
done
}
I'm fairly new at scripting, so excuse the horrid syntax. My eye is on that EOF but I have no clue.
rangechecks has no }, your while has no done...
You should indent your code. I started doing that, and noticed the error right away.
Other things:
Single quotes don't expand variables; '/etc/sysconfig/network-scripts//$1' won't do what you want it to.
echo "" is equivalent to echo.
foo='bar'; echo "blah" echo -n $foo will output 'blah echo -n bar': prefixing a command with echo just prints the words instead of running them (see the sketch after this list).
exit exits the script, I'm not sure that's what you think it does.
[y/N] usually means N by default (the capital letter).
Also, you then ask to enter Y or n. Be consistent!
When using a variable as a parameter, double quote it. This ensures it stays the way it is (as one parameter, and not expanded by the shell).
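To make the echo-before-read point concrete, here is a minimal corrected sketch; the function names come from the question, but the filename passed to writeinterfacefile is purely hypothetical:
getnewip()
{
# 'echo read -p ...' only prints the words; drop the leading echo so read actually runs
read -p "Enter the new IP Address you wish to add: " staticip
}
# Functions are called by name, not through a $ expansion:
case $yn in
[Yy]* ) writeinterfacefile "ifcfg-eth1-range0" ;;  # hypothetical filename argument
[Nn]* ) getserverinfo ;;
* ) echo 'Please enter y or n' ;;
esac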
I can't see the closing curly brace of function rangechecks
Also, don't indent the shebang in the first line.

find string in file using bash

I need to find strings matching some regexp pattern and represent the search result as an array, for iterating through it with a loop. Do I need to use sed? In general I want to replace some strings, but analyse them before replacing.
Using sed and diff:
sed -i.bak 's/this/that/' input
diff input input.bak
GNU sed will create a backup file before substitutions, and diff will show you those changes. However, if you are not using GNU sed:
mv input input.bak
sed 's/this/that/' input.bak > input
diff input input.bak
Another method, using bash pattern matching:
pattern="/X"
subst=that
while IFS='' read -r line; do
if [[ $line = *"$pattern"* ]]; then
echo "changing line: $line" 1>&2
echo "${line//$pattern/$subst}"
else
echo "$line"
fi
done < input > output
The best way to do this would be to use grep to get the lines, and populate an array with the result using newline as the internal field separator:
#!/bin/bash
# get just the desired lines
results=$(grep "mypattern" mysourcefile.txt)
# change the internal field separator to be a newline
IFS=$'\n'
# populate an array from the result lines
lines=($results)
# return the third result
echo "${lines[2]}"
You could build a loop to iterate through the results of the array, but a more traditional and simple solution would just be to use bash's iteration:
for line in "${lines[@]}"; do
echo "$line"
done
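Because the unquoted lines=($results) assignment above still applies pathname globbing to each line (a line containing * could expand unexpectedly), a more robust sketch uses mapfile (bash 4+) to load the matches into an array:
# read grep's matches into an array, one element per line (bash 4+)
mapfile -t lines < <(grep "mypattern" mysourcefile.txt)
# third match
printf '%s\n' "${lines[2]}"
# iterate safely, preserving whitespace in each element
for line in "${lines[@]}"; do
printf '%s\n' "$line"
done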
FYI: here is a similar concept I created for fun, to show how to loop over a file. This script inspects a Linux sudoers file and checks that each line contains one of the valid words in my valid_words array. It ignores comment ("#") and blank lines using sed. In this example we would probably want to print only the Invalid lines, but this script prints both.
#!/bin/bash
# -- Inspect a sudoer file, look for valid and invalid lines.
file="${1}"
declare -a valid_words=( _Alias = Defaults includedir )
actual_lines=$(cat "${file}" | wc -l)
functional_lines=$(cat "${file}" | sed '/^\s*#/d;/^\s*$/d' | wc -l)
while read -r line ;do
# -- set the line to nothing "" if it has a comment or is empty line.
line="$(echo "${line}" | sed '/^\s*#/d;/^\s*$/d')"
# -- if not set to nothing "", check if the line is valid from our list of valid words.
if ! [[ -z "$line" ]] ;then
unset found
for each in "${valid_words[@]}" ;do
found="$(echo "$line" | egrep -i "$each")"
[[ -z "$found" ]] || break;
done
[[ -z "$found" ]] && { echo "Invalid=$line"; sleep 3; } || echo "Valid=$found"
fi
done < "${file}"
echo "actual lines: $actual_lines functional lines: $functional_lines"

string select and append to another string

I have a file (FreshPIN.txt) containing lots of PIN codes, one per line; I need a bash script to select one of the PINs, print it out, and then remove it from the source file, adding it to the end of another file (usedPIN.txt).
FreshPIN.txt looks like:
========
1111111111111111
2222222222222222
3333333333333333
....
nnnnnnnnnnnnnnnn
========
Before it prints, I should be asked to enter a number from 0 to 31 and put that number into the command below:
at&g**00**=xtd*788*1111111111111111#
In the above example, at&g and =xtd*788* should stay fixed in all output commands.
fresh=FreshPIN.txt
used=usedPin.txt
echo "Please key in"
read key
pin=`head -1 "$fresh"`
printf '%s\n' "$pin" >>"$used"
sed -i~ 1d "$fresh"
printf 'at&g%s=xtd*788*%s\n' "$key" "$pin"
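If the number must appear exactly as in the example above (two digits wrapped in ** and a trailing #), the printf format can be adjusted; a sketch, where 10# forces base 10 so an input such as 09 is not read as octal:
printf 'at&g**%02d**=xtd*788*%s#\n' "$((10#$key))" "$pin"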
How about this?
#!/bin/bash
fresh=FreshPIN.txt
used=usedPIN.txt
max=31
die() {
echo >&2 "$#"
exit 1
}
# Get a random pin
pin=$(sed -n '/[[:digit:]]\+/p' -- "$fresh" | shuf -n1)
[[ "$pin" ]] || die "No more pins in file \`$fresh'"
echo "Pin chosen: $pin"
# Prompt user:
while read -e -r -p "Enter a number between 0 and $max (q to quit): " n; do
if [[ "$n" = q ]]; then
echo "Aborting. Pin $pin remains in file \`$fresh'."
exit 0
elif [[ "$n" != +([[:digit:]]) ]]; then
echo "Not a valid number. Try again."
elif ((10#$n>max)); then
echo "Number must be between 0 and $max. Try again."
else
break
fi
done
# Guard if read fails (e.g., if user presses Ctrl-D)
[[ "$n" ]] || die "Something went wrong."
# Delete this pin from file
ed -s -- "$fresh" <<EOF
/^$pin\$/d
wq
EOF
# Save pin in file
printf >> "$used" "%s\n" "$pin"
# Output:
printf "at&g**%02d**=xtd*788*%s\n" "$((10#$n))" "$pin"
It's quite robust (the user must really enter a number between 0 and 31, and it won't be messed up if the user enters, e.g., 09). It uses ed to delete the old pin from FreshPIN.txt, which is very efficient (no auxiliary file or ugly tricks with sed -i), and good bash practice overall. It uses shuf to get a random pin (no need to count the lines and hack something ugly around that). sed is used to select only the pins from FreshPIN.txt, so you can leave your header, comments, etc. in there.
