Want to print the whole line with a single command - IFS - linux

I have a script that retrieves the UID from each line of /etc/passwd, but only if the UID > 500. It works, but I want to print the whole line with only one command, and I don't know if that's possible.
Let me show you my code:
#!/bin/bash
while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
do
if [ $f3 -gt 500 ]
then
echo "$f1:$f2:$f3:$f4:$f5:$f6:$f7" <<< there is a single command for that ?
fi
done < /etc/passwd
Thanks for your response :)

Try
awk -F: '$3>=500 {print $0}' /etc/passwd
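Since printing the whole record is awk's default action when a condition is true, the {print $0} block can also be dropped:

```shell
# Same filter; printing $0 is awk's default action for a bare
# condition, so the action block is optional.
awk -F: '$3>=500' /etc/passwd
```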

Use an array:
#!/bin/bash
while IFS=: read -r -a f; do
if (( ${f[2]} > 500 )); then
IFS=: b="${f[*]}"
echo "$b"
fi
done < /etc/passwd
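The IFS=: b="${f[*]}" line works because, in a command consisting only of assignments, the assignments are performed left to right, so "${f[*]}" is joined using the new IFS. The same join can be done in a subshell so the IFS change stays local (a small sketch with sample data):

```shell
# Join an array with colons: "${f[*]}" uses the first character of
# IFS as the separator; the subshell keeps the IFS change local.
f=(root x 0 0 root /root /bin/bash)
joined=$(IFS=:; printf '%s' "${f[*]}")
echo "$joined"   # root:x:0:0:root:/root:/bin/bash
```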

I would read the entire line into a single variable, then split inside the loop.
while read line; do
old=$IFS
IFS=:
set -- $line
IFS=$old
test $3 -gt 500 || continue
printf "%s\n" "$line"
done </etc/passwd

A bash solution:
while read line
do
arr=(${line//:/ })
[ "${arr[2]}" -gt 500 ] && echo "$line"
done < /etc/passwd
Splitting the entire record into an array and checking the element at index 2, which contains the user id.
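Note that this substitution-based split is safe for /etc/passwd only as long as no field contains a space; a field that does (such as the GECOS comment field) gets split into extra elements. A quick illustration:

```shell
# The substitution-based split breaks on fields containing spaces:
line="games:x:5:60:games manager:/usr/games:/usr/sbin/nologin"
arr=(${line//:/ })
echo "${#arr[@]}"   # 8 elements, not 7 -- "games manager" was split
```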

Shell script: "Syntax error: Redirection unexpected": what is the alternative to "<<<" in shell?

Here is my code:
f="\bkg\inp.txt"
while IFS='' read -r line || [[ -n "$line" ]]; do
IFS='|' read -r -a array <<< "$line"
#echo "${array[0]}"
if [ "${array[2]}" == "Sangamithra" ]; then
printf "%s" "${array[2]}|" "${array[1]}|"${array[0]}"
fi
done < "$f"
I know what the error is...
Since I am new to shell scripting, I found code on Stack Overflow where a string is split and put into an array, but the <<< part is causing a problem, showing "Syntax error: Redirection unexpected".
Now what should the replacement for "<<<" be in a shell script, or do I have to choose another way? I am not very familiar with all the syntax, so I could not replace it myself.
Any help is very much appreciated!!
The alternative to <<< "STRING" is echo "STRING", but if you pipe the output of echo to the read command, as in echo "$line" | read .., read will be invoked in a subshell, and variables created in a subshell are not accessible from outside it.
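A quick demonstration of that subshell problem in bash: the piped read cannot modify variables in the parent shell, but reading from a process substitution can.

```shell
# The piped read runs in a subshell, so its assignment is lost;
# process substitution keeps read in the current shell.
var=unchanged
echo "hello" | read var
first=$var                   # still "unchanged"
read var < <(echo "hello")
second=$var                  # now "hello"
echo "$first / $second"      # unchanged / hello
```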
Then please try:
f="\bkg\inp.txt"
while IFS='' read -r line || [[ -n "$line" ]]; do
IFS='|' read -r -a array < <(echo "$line")
if [ "${array[2]}" == "Sangamithra" ]; then
printf "%s" "${array[2]}|${array[1]}|${array[0]}"
fi
done < "$f"
If you are executing sh, not bash, the script
above will also complain about the redirect.
Then please consider to switch to bash, or try the following sh compliant
version:
#!/bin/sh
f="\bkg\inp.txt"
while IFS='' read -r line || [ -n "$line" ]; do
IFS="|" set -- $line
if [ "$3" = "Sangamithra" ]; then
printf "%s" "$3|$2|$1"
fi
done < "$f"
Hope this helps.

Unix - Replace column value inside while loop

I have comma separated (sometimes tab) text file as below:
parameters.txt:
STD,ORDER,ORDER_START.xml,/DML/SOL,Y
STD,INSTALL_BASE,INSTALL_START.xml,/DML/IB,Y
With the code below I try to loop through the file and do something:
while read line; do
if [[ $1 = "$(echo "$line" | cut -f 1)" ]] && [[ "$(echo "$line" | cut -f 5)" = "Y" ]] ; then
# do something...
if [[ $? -eq 0 ]] ; then
# code to replace the final flag
fi
fi
done < <text_file_path>
I wanted to update the last column of the file to N if the above operation is successful; however, the approaches below are not working for me:
sed 's/$f5/N/'
'$5=="Y",$5=N;{print}'
$(echo "$line" | awk '$5=N')
Update: a few considerations that I missed at first and that should give more clarity, apologies!
The parameters file may contain lines whose last field flag is "N" as well.
The final flag needs to be updated only if the "do something" code has executed successfully.
After looping through all lines, i.e. after exiting the while loop, the flags for all rows are to be set back to "Y".
Perhaps invert the operation and do the processing in awk:
$ awk -v f1="$1" 'BEGIN {FS=OFS=","}
f1==$1 && $5=="Y" { // do something
$5="N"}1' file
Not sure what the "do something" operation is; if you need to call another command/script, that is possible as well.
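For instance, if the "do something" step is an external command, awk can invoke it with system(). A hypothetical sketch on one inline sample record, where the echo is a stand-in for the real work:

```shell
# Run a side command per matching line via awk's system(),
# then flip the flag; input supplied inline for the demo.
out=$(printf '%s\n' 'STD,ORDER,ORDER_START.xml,/DML/SOL,Y' |
  awk -v f1="STD" 'BEGIN {FS=OFS=","}
  f1==$1 && $5=="Y" {
      system("echo processing >&2")   # stand-in for the real work
      $5="N"
  }1')
echo "$out"   # STD,ORDER,ORDER_START.xml,/DML/SOL,N
```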
with bash:
(
IFS=,
while read -ra fields; do
if [[ ${fields[0]} == "$1" ]] && [[ ${fields[4]} == "Y" ]]; then
# do something
fields[4]="N"
fi
echo "${fields[*]}"
done < file | sponge file
)
I run that in a subshell so the effects of altering IFS are localized.
This uses sponge to write back to the same file. You need the moreutils package to use it, otherwise use
done < file > tmp && mv tmp file
Perhaps a bit simpler and less bash-specific:
while IFS= read -r line; do
case $line in
"$1",*,Y)
# do something
line="${line%Y}N"
;;
esac
echo "$line"
done < file
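The ${line%Y}N expansion trims the trailing Y and appends N; for instance:

```shell
# ${line%Y} removes a trailing "Y"; appending "N" flips the flag.
line='STD,ORDER,ORDER_START.xml,/DML/SOL,Y'
line="${line%Y}N"
echo "$line"   # STD,ORDER,ORDER_START.xml,/DML/SOL,N
```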
To replace ,N at the end of the line($) with ,Y:
sed 's/,N$/,Y/' file

Read from two files in simultaneously in bash when they may be missing trailing newlines

I have two text files that I'm trying to read through line by line at the same time. The files do not necessarily have the same number of lines, and the script should stop reading when it reaches the end of either file. I'd like to keep this as "pure" bash as possible. Most of the solutions I've found for doing this suggest something of the form:
while read -r f1 && read -r f2 <&3; do
echo "$f1"
echo "$f2"
done < file1 3<file2
However this fails if the last line of a file does not have a newline.
If I were only reading one file I would do something like:
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "$line"
done < file1
but when I try to extend this to reading multiple files by doing things like:
while IFS='' read -r f1 || [[ -n "$f1" ]] && read -r f2 <&3 || [[ -n "$f2" ]]; do
echo "$f1"
echo "$f2"
done < file1 3<file2
or
while IFS='' read -r f1 && read -r f2 <&3 || [[ -n "$f1" || -n "$f2" ]]; do
echo "$f1"
echo "$f2"
done < file1 3<file2
I can't seem to get the logic right and the loop either doesn't terminate or finishes without reading the last line.
I can get the desired behavior using:
while true; do
read -r f1 <&3 || if [[ -z "$f1" ]]; then break; fi;
read -r f2 <&4 || if [[ -z "$f2" ]]; then break; fi;
echo "$f1"
echo "$f2"
done 3<file1 4<file2
However this doesn't seem to match the normal (?)
while ... read ...; do
...
done
idiom that I see for reading files.
Is there a better way to simultaneously read from two files that might have differing numbers of lines and last lines that are not newline terminated?
What are the potential drawbacks of my way of reading the files?
You can override the precedence of the && and || operators by grouping them with { }:
while { IFS= read -r f1 || [[ -n "$f1" ]]; } && { IFS= read -r f2 <&3 || [[ -n "$f2" ]]; }; do
Some notes: You can't use ( ) for grouping in this case because that forces its contents to run in a subshell, and variables read in subshells aren't available in the parent shell. Also, you need a ; before each }, so the shell can tell it isn't just a parameter to the last command. Finally, you need IFS= (or equivalently IFS='') before each read command, since assignments given as a prefix to a command apply only to that one command.
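Put together, a sketch of the full loop, demonstrated here on inline sample data in place of file1 and file2 (process substitution is just a stand-in so the demo is self-contained):

```shell
# Grouped reads: the loop continues while BOTH streams still yield
# data, including a final line without a trailing newline.
result=""
while { IFS= read -r f1 || [[ -n "$f1" ]]; } &&
      { IFS= read -r f2 <&3 || [[ -n "$f2" ]]; }; do
    result+="$f1|$f2 "
done < <(printf 'a\nb\nc\n') 3< <(printf '1\n2\n3')
echo "$result"   # a|1 b|2 c|3
```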

pass line to function - overwrite using sed line by line - BASH

Why does this code not work? What is the problem with passing $line to a function?
function a {
echo $1 | grep $2
}
while read -r line; do
a $line "LAN"
done < database.txt
Another question: I have to overwrite a txt file line by line, possibly using the sed command, but not the whole line, only the part to change. Something like this:
while read -r line; do
echo $line | sed "s/STRING1/STRING2/"
done < namefile
EDIT
I'll give you an example for my second question.
input file:
LAN 1:
[text]11111[text]
[text]22222[text]
[text]33333[text]
LAN 2:
[text]11111[text]
[text]22222[text]
[text]33333[text]
output file:
LAN 1:
[text]44444[text]
[text]22222[text]
[text]33333[text]
LAN 2:
[text]11111[text]
[text]22222[text]
[text]33333[text]
I have to overwrite database.txt, so I think of doing this line by line using a counter for the LANs. This is my code:
while read -r line; do
echo "$line" | grep -q LAN
if [ $? = "0" ]; then
net_count=$((net_count+1))
fi
if [ $net_count = <lan choose before> ]; then # variable that contains lan number chosen by user
echo "$line" | fgrep -q "11111"
if [ "$?" = "0" ]; then
echo $line | sed "s/11111/44444/" > database.txt
break
fi
fi
done < database.txt
Thank you all
Running sed once for every line is typically over a thousand times slower than running just one sed instance processing your whole file of input. Don't do it.
If you want to do string manipulation on a line-by-line basis, use native bash primitives for the purpose, as documented in BashFAQ #100:
a() {
local line=$1 regex=$2
if [[ $line =~ $regex ]]; then
printf '%s\n' "$line"
fi
}
while IFS= read -r line; do
a "$line" LAN
done <database.txt
Likewise, for substring replacements, an appropriate parameter expansion primitive exists:
while read -r line; do
printf '%s\n' "${line//STRING1/STRING2}"
done < namefile
That said, those approaches are only appropriate if you need to iterate line-by-line. Typically, it makes more sense to use a single grep or sed operation, and iterate over the results of those calls if you need to do native bash operations. For instance, the following iterates over the output from grep, as emitted by a process substitution:
regex=LAN
while IFS= read -r line; do
echo "Read line from grep: $line"
done < <(grep -e "$regex" <database.txt)
You need to put $line in quotes for the whole line to be considered as a single parameter. Otherwise, every space character splits the string into multiple parameters to the function:
#!/bin/bash
function a {
echo "$1" | grep "$2"
}
while read -r line; do
a "$line" "LAN"
done < database.txt
And for the second question, if you want to print only the lines that you modify, you can use the following code:
while read -r line; do
echo "$line" | sed -n "s/STRING1/STRING2/p"
done < namefile
Here, -n suppresses sed's automatic printing, and the p flag prints only the lines where the substitution matched.
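Since the loop body is pure sed anyway, the whole loop collapses into a single sed invocation over the file, which is also much faster (demo input supplied inline here in place of namefile):

```shell
# One sed pass prints only the lines in which a substitution occurred.
out=$(printf '%s\n' 'a STRING1 b' 'no match here' |
      sed -n 's/STRING1/STRING2/p')
echo "$out"   # a STRING2 b
```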

find string in file using bash

I need to find strings matching some regexp pattern and represent the search result as an array, for iterating through it with a loop. Do I need to use sed? In general I want to replace some strings, but analyse them before replacing.
Using sed and diff:
sed -i.bak 's/this/that/' input
diff input input.bak
GNU sed will create a backup file before substitutions, and diff will show you those changes. However, if you are not using GNU sed:
mv input input.bak
sed 's/this/that/' input.bak > input
diff input input.bak
Another method using grep:
pattern="/X"
subst=that
while IFS='' read -r line; do
if [[ $line = *"$pattern"* ]]; then
echo "changing line: $line" 1>&2
echo "${line//$pattern/$subst}"
else
echo "$line"
fi
done < input > output
The best way to do this would be to use grep to get the lines, and populate an array with the result using newline as the internal field separator:
#!/bin/bash
# get just the desired lines
results=$(grep "mypattern" mysourcefile.txt)
# change the internal field separator to be a newline
IFS=$'\n'
# populate an array from the result lines
lines=($results)
# return the third result
echo "${lines[2]}"
You could build a loop to iterate through the results in the array; since lines is an array, use bash's array expansion:
for line in "${lines[@]}"; do
echo "$line"
done
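On bash 4+, mapfile (a.k.a. readarray) does the same split more robustly, without touching IFS and without the globbing risk of an unquoted expansion. A sketch with inline sample input:

```shell
# mapfile reads one array element per line of input.
mapfile -t lines < <(printf '%s\n' 'one match' 'skip this' 'two match' | grep match)
echo "${#lines[@]}"   # 2
echo "${lines[1]}"    # two match
```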
FYI: here is a similar concept I created for fun; I thought it would be good to show how to loop over a file like this. It is a script that inspects a Linux sudoers file and checks that each line contains one of the valid words in my valid_words array list. It ignores comment ("#") and blank ("") lines with sed. In this example we would probably want to print only the invalid lines, but this script prints both.
#!/bin/bash
# -- Inspect a sudoer file, look for valid and invalid lines.
file="${1}"
declare -a valid_words=( _Alias = Defaults includedir )
actual_lines=$(wc -l < "${file}")
functional_lines=$(sed '/^\s*#/d;/^\s*$/d' "${file}" | wc -l)
while read -r line; do
# -- set the line to nothing "" if it has a comment or is empty line.
line="$(echo "${line}" | sed '/^\s*#/d;/^\s*$/d')"
# -- if not set to nothing "", check if the line is valid from our list of valid words.
if ! [[ -z "$line" ]] ;then
unset found
for each in "${valid_words[@]}" ;do
found="$(echo "$line" | egrep -i "$each")"
[[ -z "$found" ]] || break;
done
[[ -z "$found" ]] && { echo "Invalid=$line"; sleep 3; } || echo "Valid=$found"
fi
done < "${file}"
echo "actual lines: $actual_lines funtional lines: $functional_lines"
