sed command inside while loop is not working on Ubuntu Linux

I have two files; the first (file.txt) includes patterns that I want to search for in the second file (file.cfg).
Once a pattern is found in "file.cfg" I want to remove it, plus whatever comes after it, until the next Hello at the beginning of a line.
I have created the below script, but it's not working:
#! /bin/bash
cat file.txt | while read LINE; do
echo $LINE
sed -i "/^$LINE$/,/^Hello/{//p;d;}" "file.cfg"
sed -i "/^$LINE$/d" "file.cfg"
done
It was working fine yesterday on test files. Today I modified the file names, and it stopped working.
I am not sure if I changed something by mistake, but if I run the below from the Ubuntu command line, it works:
sed -i "/^Hello World$/,/^Hello/{//p;d;}" "file.cfg"
Also, I added an echo in the loop, and I can see each line of "file.txt".
To provide further information, here is an example of what I need to achieve with this code:
"file.txt" contains patterns I need to match in "file.cfg"; once a pattern is found, I need to remove it and anything that comes after it until the next Hello.
sed -i "/^$LINE$/,/^Hello/{//p;d;}" "file.cfg" --> this line should remove everything in between.
sed -i "/^$LINE$/d" "file.cfg" --> this removes the pattern itself.
+++++++++
See the example below:
File.cfg is divided into sections; each section starts with Hello.
File.txt contains random section names; I need a script to read each section name from File.txt, check whether it exists in File.cfg, and then remove the section name and all of its contents.
File.txt :
Hello World
Hello Mohammad
Hello Scripting
File.cfg :
Hellow xyz
a
b
c
Hello World
v
b
n
Hello stack
q
w
e
The final results should be :
Hellow xyz
a
b
c
Hello stack
q
w
e
Once a section name is found, I need to remove everything until the next "Hello" that comes at the beginning of a line (a new section).
None of the lines start with Hello except the section names.

$ awk 'NR==FNR{names[$0]; next} $1=="Hello"{f=($0 in names)} !f' File.txt File.cfg
Hellow xyz
a
b
c
Hello stack
q
w
e
If you want to do "in place" editing then, just as GNU sed (which you're currently using) has -i, GNU awk has -i inplace. But note that you're working with 2 input files, so you'd end up writing to both of them:
awk -i inplace 'NR==FNR{names[$0]; print; next} $1=="Hello"{f=($0 in names)} !f' File.txt File.cfg
or only activate inplace editing for the 2nd one; see the gawk man page for how to control that. IMHO just using a temp output file is simpler:
tmp=$(mktemp) &&
awk 'NR==FNR{names[$0]; next} $1=="Hello"{f=($0 in names)} !f' File.txt File.cfg > "$tmp" &&
mv -- "$tmp" File.cfg
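For a quick sanity check, the example from the question can be reproduced in a scratch directory (file names as in the question; the script is just the one-liner above):

```shell
# recreate the sample files from the question in a scratch dir, then run the awk filter
cd "$(mktemp -d)" || exit 1
printf '%s\n' 'Hello World' 'Hello Mohammad' 'Hello Scripting' > File.txt
printf '%s\n' 'Hellow xyz' a b c 'Hello World' v b n 'Hello stack' q w e > File.cfg
result=$(awk 'NR==FNR{names[$0]; next} $1=="Hello"{f=($0 in names)} !f' File.txt File.cfg)
echo "$result"
```

The output matches the expected final results listed in the question.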

I like @tripleee's suggestion to create a sed script from the patterns file. It results in a single pass, and using sed to generate sed appeals to my sense of humor :)
The first step is to generate the sed script:
sed 's|.*|/^&$/, /^Hello/ {\n\t/^&$/ d\n\t/^Hello/! d\n}|' file.txt
/^Hello World$/, /^Hello/ {
/^Hello World$/ d
/^Hello/! d
}
/^Hello Mohammad$/, /^Hello/ {
/^Hello Mohammad$/ d
/^Hello/! d
}
/^Hello Scripting$/, /^Hello/ {
/^Hello Scripting$/ d
/^Hello/! d
}
In a nutshell, for each address range we want to delete everything except the ending pattern.
I'll generate the above sed script using bash process substitution and treat it as a sed program file (it could also be put in a temp file):
#!/bin/bash
sed -f <(
sed 's|.*|/^&$/, /^Hello/ {\n\t/^&$/ d\n\t/^Hello/! d\n}|' file.txt
) file.cfg
I left out the -i in-place edit option for testing.
For non-destructive testing, compare the expected results with the output of the script:
diff expect <(./remove.sh) && echo ok


cat without line breaks: why does tr '\n' not work?

I generated 1000 output files containing a single line with (mistakenly) no line break at the end, so that
cat filename_* > outfile
generates a file with a single line. I attempted to remedy this using
cat filename_* | tr '\n' ' ' > outfile
but I get exactly the same result - a file with a single line of output. Why doesn't the latter code (which ought to add a line break for each filename_* file) accomplish what I'm trying to do?
I think you could manually append a line break to each of your 1000 out files, and then cat them all later:
echo | tee -a filename_*
cat filename_* > outfile
Edit:
Change the first step to echo | tee -a filename_* as @rowboat suggested
If all your files are missing the final linefeed then you can use sed for adding it on the fly:
# with GNU sed
sed '$s/$/\n/' filename_* > outfile
# with standard sed and bash, zsh, etc...
sed $'$s/$/\\\n/' filename_* > outfile
# with standard sed and a POSIX shell
sed '$s/$/\
/' filename_* > outfile
tr '\n' ' ' says to replace each \n with a space; you've already stated the inputs do not contain any \n, so the tr does nothing and the final output is just a copy of the input.
Setup:
for ((i=1;i<=5;i++))
do
printf 'abcd' > out${i}
done
$ cat out*
abcdabcdabcdabcdabcd
Many commands can process a file and add a \n, it just depends on how much typing you want to do, eg:
$ sed 's/$/&/' out* # or: sed -n '/$/p' out*
abcd
abcd
abcd
abcd
abcd
$ awk '1' out*
abcd
abcd
abcd
abcd
abcd
I'm not coming up with any ideas on how to make cat itself append a \n, but one idea would be to use a user-defined function; assume we want to name our new function catn (cat and add \n on the end):
$ type -a catn # verify name "catn" not currently in use
-bash: type: catn: not found
$ catn() { awk '1' "${@:--}"; } # wrap a function definition around the awk solution
$ catn out*
abcd
abcd
abcd
abcd
abcd

Cut matching line and X successive lines until newline and paste into file

I would like to match all lines from a file containing a word, and take all lines under them until coming to two newline characters in a row.
I have the following sed code to cut and paste specific matching lines, but not their subsequent lines:
sed 's|.*|/\\<&\\>/{w results\nd}|' teststring | sed -i.bak -f - testfile
How could I modify this to take all subsequent lines?
For example, say I wanted to match lines with 'dog', the following should take the first 3 lines of the 5:
The best kind of an animal is a dog, for sure
-man's best friend
-related to wolves
Racoons are not cute
Is there a way to do this?
This should do:
awk '/dog/ {f=1} /^$/ {f=0} f {print > "new"} !f {print > "tmp"}' file && mv tmp file
It will set f to true if the word dog is found; then, if a blank line is found, it sets f to false.
If f is true, print to the new file.
If f is false, print to the tmp file.
Then copy the tmp file over the original file.
Edit: this can be shortened some:
awk '/dog/ {f=1} /^$/ {f=0} {print > (f?"new":"tmp")}' file && mv tmp file
Edit2: as requested add space for every section in the new file:
awk '/dog/ {f=1;print ""> "new"} /^$/ {f=0} {print > (f?"new":"tmp")}' file && mv tmp file
If the original file contains tabs or spaces instead of just a blank line after each dog section, change /^$/ to /^[ \t]*$/.
This might work for you (GNU sed):
sed 's|.*|/\\<&\\>/ba|' stringFile |
sed -f - -e 'b;:a;w resultFile' -e 'n;/^$/!ba' file
Build a set of regexps from the stringFile and send matches to :a. Then write the matched line and any further lines until an empty line (or end of file) to the resultFile.
N.B. The results could be sent directly to resultFile, using:
sed 's#.*#/\\<&\\>/ba#' stringFile |
sed -nf - -e 'b;:a;p;n;/^$/!ba' file > resultFile
To cut the matches from the original file use:
sed 's|.*|/\\<&\\>/ba|' stringFile |
sed -f - -e 'b;:a;N;/\n\s*$/!ba;w resultFile' -e 's/.*//p;d' file
Is this what you're trying to do?
$ awk -v RS= '/dog/' file
The best kind of an animal is a dog, for sure
-man's best friend
-related to wolves
Could you please try the following. It sets found when dog matches and prints the matching line plus the next two (the counter stops printing before the 4th line of the section):
awk '/dog/{count="";found=1} found && ++count<4' Input_file > temp && mv temp Input_file

Iterative Bash Script Bug

Using a bash script, I'm trying to iterate through a text file that only has around 700 words, line-by-line, and run a case-insensitive grep search in the current directory using that word on particular files. To break it down, I'm trying to output the following to a file:
Append a newline to a file, then the searched word, then another newline
Append the results of the grep command using that search
Repeat steps 1 and 2 until all words in the list are exhausted
So for example, if I had this list.txt:
search1
search2
I'd want the results.txt to be:
search1:
grep result here
search2:
grep result here
I've found some answers throughout the stack exchanges on how to do this and have come up with the following implementation:
#!/usr/bin/bash
while IFS = read -r line;
do
"\n$line:\n" >> "results.txt";
grep -i "$line" *.in >> "results.txt";
done < "list.txt"
For some reason, however, this (and the numerous variants I've tried) isn't working. It seems trivial, but it's been frustrating me beyond belief. Any help is appreciated.
Your script would work if you changed it to:
while IFS= read -r line; do
printf '\n%s:\n' "$line"
grep -i "$line" *.in
done < list.txt > results.txt
but it'd be extremely slow. See https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice for why you should think long and hard before writing a shell loop just to manipulate text. The standard UNIX tool for manipulating text is awk:
awk '
NR==FNR { words2matches[$0]; next }
{
for (word in words2matches) {
if ( index(tolower($0),tolower(word)) ) {
words2matches[word] = words2matches[word] $0 ORS
}
}
}
END {
for (word in words2matches) {
print word ":" ORS words2matches[word]
}
}
' list.txt *.in > results.txt
The above is untested, of course, since you didn't provide sample input/output we could test against.
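The answerer notes the awk above is untested; here is a quick smoke test under made-up file names and contents (all of these are my assumptions, not the asker's data):

```shell
# smoke test for the words2matches approach with invented sample files
cd "$(mktemp -d)" || exit 1
printf '%s\n' search1 search2 > list.txt
printf '%s\n' 'abc' 'has Search1 here' > a.in
printf '%s\n' 'Search2 only' > b.in
awk '
NR==FNR { words2matches[$0]; next }
{
    for (word in words2matches)
        if ( index(tolower($0),tolower(word)) )
            words2matches[word] = words2matches[word] $0 ORS
}
END {
    for (word in words2matches)
        print word ":" ORS words2matches[word]
}
' list.txt *.in > results.txt
results=$(cat results.txt)
echo "$results"
```

Note that for..in iteration order is unspecified, so the two "word:" groups may appear in either order.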
Possible problems:
bash path - use /bin/bash path instead of /usr/bin/bash
blank spaces - remove the spaces around '=' (write IFS= with no spaces)
echo - use -e option for handling escape characters (here: '\n')
semicolons - not required at end of line
Try following script:
#!/bin/bash
while IFS= read -r line; do
echo -e "$line:\n" >> "results.txt"
grep -i "$line" *.in >> "results.txt"
done < "list.txt"
You do not even need to write a bash script for this purpose:
INPUT FILES:
$ more file?.in
::::::::::::::
file1.in
::::::::::::::
abc
search1
def
search3
::::::::::::::
file2.in
::::::::::::::
search2
search1
abc
def
::::::::::::::
file3.in
::::::::::::::
abc
search1
search2
def
search3
PATTERN FILE:
$ more patterns
search1
search2
search3
CMD:
$ grep -inf patterns file*.in | sort -t':' -k3 | awk -F':' 'BEGIN{OFS=FS}{if($3==buffer){print $1,$2}else{print $3; print $1,$2}buffer=$3}'
OUTPUT:
search1
file1.in:2
file2.in:2
file3.in:2
search2
file2.in:1
file3.in:3
search3
file1.in:4
file3.in:5
EXPLANATIONS:
grep -inf patterns file*.in greps all the file*.in files for all the patterns located in the patterns file thanks to the -f option; -i forces case-insensitive matching, and -n adds the line numbers
sort -t':' -k3 sorts the output on the 3rd column to group patterns together
awk -F':' 'BEGIN{OFS=FS}{if($3==buffer){print $1,$2}else{print $3; print $1,$2}buffer=$3}' then prints the display that you want, using : as Field Separator and Output Field Separator; a buffer variable saves the pattern (3rd field) so the pattern is printed only when it changes ($3!=buffer)

Replace string in a file from a file [duplicate]

This question already has answers here:
Difference between single and double quotes in Bash
(7 answers)
Closed 5 years ago.
I need help with replacing a string in a file where "from"-"to" strings coming from a given file.
fromto.txt:
"TRAVEL","TRAVEL_CHANNEL"
"TRAVEL HD","TRAVEL_HD_CHANNEL"
"FROM","TO"
The first column is what I'm searching for, which is to be replaced with the second column.
So far I wrote this small script:
while read p; do
var1=`echo "$p" | awk -F',' '{print $1}'`
var2=`echo "$p" | awk -F',' '{print $2}'`
echo "$var1" "AND" "$var2"
sed -i -e 's/$var1/$var2/g' test.txt
done <fromto.txt
Output looks good (x AND y), but for some reason it does not replace the first column ($var1) with the second ($var2).
test.txt:
"TRAVEL"
Output:
"TRAVEL" AND "TRAVEL_CHANNEL"
sed -i -e 's/"TRAVEL"/"TRAVEL_CHANNEL"/g' test.txt
"TRAVEL HD" AND "TRAVEL_HD_CHANNEL"
sed -i -e 's/"TRAVEL HD"/"TRAVEL_HD_CHANNEL"/g' test.txt
"FROM" AND "TO"
sed -i -e 's/"FROM"/"TO"/g' test.txt
$ cat test.txt
"TRAVEL"
input:
➜ cat fromto
TRAVEL TRAVEL_CHANNEL
TRAVELHD TRAVEL_HD
➜ cat inputFile
TRAVEL
TRAVELHD
The work:
➜ awk 'BEGIN{while(getline < "fromto") {from[$1] = $2}} {for (key in from) {gsub(key,from[key])} print}' inputFile > output
and output:
➜ cat output
TRAVEL_CHANNEL
TRAVEL_CHANNEL_HD
➜
This first (BEGIN{}) block loads your input file into an associative array: from["TRAVEL"] = "TRAVEL_CHANNEL", then rather inefficiently performs search and replace line by line for each array element in the input file, outputting the results, which I piped to a separate output file.
The caveat, you'll notice, is that the replacements can interfere with each other; the 2nd line of output is a perfect example, since the TRAVEL replacement happens first. You can try ordering your replacements differently, or use a regex instead of a gsub. I'm not certain awk arrays are guaranteed to iterate in a particular order, though. Something to get you started, anyway.
2nd caveat: there's a way to do the gsub for the whole file as the 2nd step of your BEGIN and probably make this much faster, but I'm not sure what it is.
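For what it's worth, one possible shape for that whole-file gsub (a sketch of my own, using the same made-up fromto/inputFile files as above; the interference and iteration-order caveats still apply): slurp the input into one string, then run each replacement once at the end.

```shell
# sketch: NR==FNR loads the from->to map; the rest slurps the input; END does one gsub per key
cd "$(mktemp -d)" || exit 1
printf '%s\n' 'TRAVEL TRAVEL_CHANNEL' 'TRAVELHD TRAVEL_HD' > fromto
printf '%s\n' TRAVEL TRAVELHD > inputFile
awk 'NR==FNR {from[$1]=$2; next}
     {text = text $0 ORS}
     END {for (key in from) gsub(key, from[key], text); printf "%s", text}' fromto inputFile > output
firstline=$(head -1 output)
```

Depending on which key is applied first, the 2nd output line can still come out as either TRAVEL_CHANNELHD or TRAVEL_CHANNEL_HD, so this doesn't fix the interference problem, only the per-line inefficiency.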
You can't do this in one shot; you have to use variables within a script.
Maybe something like the below sed command for a full replacement:
-bash-4.4$ cat > toto.txt
1
2
3
-bash-4.4$ cat > titi.txt
a
b
c
-bash-4.4$ sed 's|^\s*\(\S*\)\s*\(.*\)$|/^\2\\>/s//\1/|' toto.txt | sed -f - titi.txt > toto.txt
-bash-4.4$ cat toto.txt
a
b
c
-bash-4.4$

bash script append text to first line of a file

I want to add text to the end of the first line of a file using a bash script.
The file is /boot/cmdline.txt, which does not allow line breaks and needs new commands separated by a blank, so the text I want to add really needs to be on the first line.
What i got so far is:
line=' bcm2708.w1_gpio_pin=20'
file=/boot/cmdline.txt
if ! grep -q -x -F -e "$line" <"$file"; then
printf '%s' "$line\n" >>"$file"
fi
But that appends the text after the line break of the first line, so the result is wrong.
I either need to trim the file content, add my text and a line feed, or somehow add the text to the first line without touching the rest. My knowledge of bash scripts is not good enough to find a solution here, and all the examples I find online add to the beginning/end of every line in a file, not just the first line.
This sed command will add 123 to the end of the first line of your file:
sed ' 1 s/.*/&123/' yourfile.txt
also
sed '1 s/$/ 123/' yourfile.txt
For appending the result to the same file you have to use the -i switch:
sed -i ' 1 s/.*/&123/' yourfile.txt
This is a solution to add "ok" to the end of the first line of /etc/passwd; I think you can use it in your script with a little bit of 'tuning':
$ awk 'NR==1{printf "%s %s\n", $0, "ok"}' /etc/passwd
root:x:0:0:root:/root:/bin/bash ok
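One possible bit of that 'tuning' (my sketch, not the answerer's code): modify line 1, then let the bare 1 print every line, so the whole file passes through rather than just the first line:

```shell
# append " ok" to the first line only; the bare 1 prints every line unchanged
result=$(printf '%s\n' first second | awk 'NR==1{$0=$0" ok"} 1')
echo "$result"
# first ok
# second
```

For the asker's case, the appended string would be " bcm2708.w1_gpio_pin=20" and the input /boot/cmdline.txt.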
To edit a file, you can use ed, the standard editor:
line=' bcm2708.w1_gpio_pin=20'
file=/boot/cmdline.txt
if ! grep -q -x -F -e "$line" <"$file"; then
ed -s "$file" < <(printf '%s\n' 1 a "$line" . 1,2j w q)
fi
ed's commands:
1: go to line 1
a: append (this will insert after the current line)
We're in insert mode and we're inserting the expansion of $line
.: stop insert mode
1,2j: join lines 1 and 2
w: write
q: quit
This can be used to append a variable to the first line of input:
awk -v suffix="$suffix" '{print NR==1 ? $0 suffix : $0}'
This will work even if the variable could potentially contain regex formatting characters.
Example:
suffix=' [first line]'
cat input.txt | awk -v suffix="$suffix" '{print NR==1 ? $0 suffix : $0}' > output.txt
input.txt:
Line 1
Line 2
Line 3
output.txt:
Line 1 [first line]
Line 2
Line 3
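To illustrate the regex-safety claim with a made-up example: a suffix full of metacharacters passes through literally, because -v delivers it as a plain string that awk only ever concatenates, never regex-matches:

```shell
# metacharacters in suffix are not interpreted: awk concatenates the string, it never treats it as a pattern
suffix='.*[$^'
result=$(printf '%s\n' 'Line 1' 'Line 2' | awk -v suffix="$suffix" '{print NR==1 ? $0 suffix : $0}')
echo "$result"
# Line 1.*[$^
# Line 2
```

(One caveat: -v does process backslash escapes, so a suffix containing backslashes would need extra care.)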
