Read a file line by line and search another file for lines with a partial match - linux

I have a file of strings that partially match lines in another file. My plan was a while loop with read, substituting each line of partial matches into a grep command to search a database file, but for some reason I am not getting any output (outputfile.txt is empty).
Here is my current script:
while read -r line; do
grep $line /path/to/databasefile >> /path/to/folder/outputfile.txt
done < "/partial_matches.txt"
The database file has multiple entries, each a sequence name followed by the DNA sequence on the next line:
>transcript_ab
AGTCAGTCATGTC
>transcript_ac
AGTCAGTCATGTC
>transcript_ad
AGTCAGTCATGTC
and the partial-match search file has one string per line:
ab
ac
and I'm looking for a return of:
>transcript_ab
>transcript_ac
Any help would be appreciated. Thanks.

If you are using GNU grep, then its -f option is what you are looking for:
grep -f /partial_matches.txt /path/to/databasefile
(If partial_matches.txt contains only plain strings rather than regex patterns, use grep -F instead of plain grep.)
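Since the desired output is only the header lines, a variation (just a sketch, reusing the question's paths) is to restrict the search to lines starting with > first, so the sequence lines can never match:
grep '^>' /path/to/databasefile | grep -Ff /partial_matches.txt > /path/to/folder/outputfile.txt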

You can use a for loop instead:
for i in $(cat partial_matches.txt); do
    grep "$i" /path/to/databasefile >> /path/to/folder/outputfile.txt
done
Also, check if you have a typo:
"/partial_matches.txt" -> "./partial_matches.txt"

List strings not found with grep

I want to search in a file for specific words and then show only the words that were not found. So far, with my limited skills in this area, I can find which words are found:
egrep -w "^bower_components|^npm-debug.log" .gitignore
If my .gitignore file contained bower_components but not npm-debug.log it would return bower_components. I want to find out how to ask it to return only the part of the pattern that was not found. That is, I want it to return only npm-debug.log. I don't want to see all of the text from the file that does not match the search, only the single word from the file that was not found. How do I do that?
I don't have a one-liner for you, but a short script would work for your use case.
file=$1
shift
for var in "$@"
do
    found=$(grep "^$var" "$file")
    if [[ -z $found ]]
    then
        echo "$var"
    fi
done
Explanation: take the first argument as the file name and any subsequent arguments as strings to test for in the file. Loop through those arguments and test whether the grep command returned anything. If it returned nothing, the string wasn't found, and we print it to the console.
Usage: [script name] .gitignore bower_components npm-debug.log
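With the question's example, where .gitignore contains bower_components but not npm-debug.log, a run would look like this (the script name notfound.sh is made up):
$ ./notfound.sh .gitignore bower_components npm-debug.log
npm-debug.log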
I don't think this can be done with grep alone; here's a clumsy awk one-liner:
awk -v p1="npm-debug.log" -v p2="bower_components" 'BEGIN{a[p1]=0;a[p2]=0} $0 ~ p1 {a[p1]+=1}; $0 ~ p2 {a[p2]+=1}; END{for(i in a){if(a[i]==0){print i}}}' .gitignore
I'm passing the search patterns into awk as variables, pre-populating an array using the patterns as indices, incrementing a pattern's count when it matches, and at the end printing only the patterns that had no hits (are still 0).
If you have just simple words like in your example, you could do it with two grep commands in a one-liner. For example:
file:
first line
_another line
just another line
STRANGE LINE
last line
words:
first
last
row
other
^another
LINE$
We can first get the matched patterns:
> grep -owf words file
first
LINE
last
Then we grep -v for them in words, adding a uniq in the middle to remove any duplicates.
So finally we get the patterns from words that were not matched in file:
> grep -owf words file | uniq | grep -vf - words
row
other
^another

How can I remove the last character of a file in unix?

Say I have some arbitrary multi-line text file:
sometext
moretext
lastline
How can I remove only the last character (the e, not the newline or null) of the file without making the text file invalid?
A simpler approach (outputs to stdout, doesn't update the input file):
sed '$ s/.$//' somefile
$ is a Sed address that matches the last input line only, thus causing the following function call (s/.$//) to be executed on the last line only.
s/.$// replaces the last character on the (in this case last) line with an empty string; i.e., effectively removes the last char. (before the newline) on the line.
. matches any character on the line, and following it with $ anchors the match to the end of the line; note how the use of $ in this regular expression is conceptually related, but technically distinct from the previous use of $ as a Sed address.
Example with stdin input (assumes Bash, Ksh, or Zsh):
$ sed '$ s/.$//' <<< $'line one\nline two'
line one
line tw
To update the input file too (do not use if the input file is a symlink):
sed -i '$ s/.$//' somefile
Note:
On macOS, you'd have to use -i '' instead of just -i; for an overview of the pitfalls associated with -i, see the bottom half of this answer.
If you need to process very large input files and/or performance / disk usage are a concern and you're using GNU utilities (Linux), see ImHere's helpful answer.
truncate
truncate -s-1 file
This removes one (-1) character from the end of the file, in place, just as >> appends to the same file.
The problem with this approach is that it doesn't retain a trailing newline if it existed.
The solution is:
if [ -n "$(tail -c1 file)" ]   # if the file does not end with a newline
then
    truncate -s-1 file         # remove one char, as the question requests
else
    truncate -s-2 file         # remove the last two characters
    echo "" >> file            # add the trailing newline back
fi
This works because tail takes the last byte (not char).
It takes almost no time even with big files.
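You can see what the test distinguishes with a quick experiment (the file names f1 and f2 are just for illustration):
$ printf 'abc' > f1     # no trailing newline
$ printf 'abc\n' > f2   # trailing newline
$ [ -n "$(tail -c1 f1)" ] && echo "f1: no trailing newline"
f1: no trailing newline
$ [ -n "$(tail -c1 f2)" ] && echo "f2: no trailing newline"   # prints nothing
Command substitution strips a trailing newline, so $(tail -c1 file) is empty exactly when the file ends with one.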
Why not sed
The problem with a sed solution like sed '$ s/.$//' file is that it reads the whole file first (taking a long time with large files), then you need a temporary file (of the same size as the original):
sed '$ s/.$//' file > tempfile
rm file; mv tempfile file
And then move the tempfile to replace the file.
Here's another using ex, which I find not as cryptic as the sed solution:
printf '%s\n' '$' 's/.$//' wq | ex somefile
The $ goes to the last line, the s deletes the last character, and wq is the well known (to vi users) write+quit.
After a whole bunch of playing around with different strategies (and avoiding sed -i or perl), the best way I found to do this was with:
sed '$! { P; D; }; s/.$//' somefile
If the goal is to remove the last character in the last line, this awk should do:
awk '{a[NR]=$0} END {for (i=1;i<NR;i++) print a[i];sub(/.$/,"",a[NR]);print a[NR]}' file
sometext
moretext
lastlin
It stores all lines in an array, then prints them out, changing the last line.
Just a remark: with -i, sed writes the result to a temporary file and then renames it over the original, replacing it.
So if you are tailing the file, you'll get a "No such file or directory" warning until you reissue the tail command.
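With GNU sed you can watch this happen, since the rename gives the file a new inode (the inode numbers below are made up for illustration):
$ ls -i somefile
1234567 somefile
$ sed -i '$ s/.$//' somefile
$ ls -i somefile
1234890 somefile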
EDITED ANSWER
I created a test file with your text on my Desktop. This test file is saved as "old_file.txt":
sometext
moretext
lastline
Afterwards, I wrote a small script to take the old file and eliminate the last character of the last line:
#!/bin/bash
# wc -l counts newline characters, so this assumes the file ends with a newline
no_of_lines=$(wc -l < '/root/Desktop/old_file.txt')
let "all_but_last=no_of_lines-1"
sed -n 1,"$all_but_last"p '/root/Desktop/old_file.txt' > '/root/Desktop/my_new_file'
sed -n "$no_of_lines","$no_of_lines"p '/root/Desktop/old_file.txt' | sed 's/.$//' >> '/root/Desktop/my_new_file'
Opening the new file I created showed the output as follows:
sometext
moretext
lastlin
I apologize for my previous answer (wasn't reading carefully)
sed '$ s/.$//' filename | tee newFilename
This should do your job (the leading $ restricts the substitution to the last line, so only its final character is removed).
A couple of perl solutions, for comparison/reference:
(echo 1a; echo 2b) | perl -e '$_=join("",<>); s/.$//; print'
(echo 1a; echo 2b) | perl -e 'while(<>){ if(eof) {s/.$//}; print }'
I find the first read-whole-file-into-memory approach can be generally quite useful (less so for this particular problem). You can now do regexes which span multiple lines, for example to combine every 3 lines of a certain format into 1 summary line, as sketched below.
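For instance (the input lines and the " | " separator are made up for illustration):
$ (echo a; echo b; echo c; echo d; echo e; echo f) | perl -e '$_=join("",<>); s/(.*)\n(.*)\n(.*)\n/$1 | $2 | $3\n/g; print'
a | b | c
d | e | f
Since . does not match a newline, each (.*) captures one line, and /g repeats the merge across the whole slurped file.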
For this problem, truncate would be faster and the sed version is shorter to type. Note that truncate requires a file to operate on, not a stream. Normally I find sed to lack the power of perl and I much prefer the extended-regex / perl-regex syntax. But this problem has a nice sed solution.

How to delete 5 lines before and 6 lines after pattern match using Sed?

I want to search for a pattern "xxxx" in a file and delete 5 lines before this pattern and 6 lines after this match. How can I do this using sed?
This might work for you (GNU sed):
sed ':a;N;s/\n/&/5;Ta;/xxxx/!{P;D};:b;N;s/\n/&/11;Tb;d' file
Keep a rolling window of the current line plus the 5 before it; on encountering the specified string, read in 6 more lines (12 in total) and delete them all.
N.B. This is a bare-bones solution and will most probably need tailoring to your specific needs. Questions to consider: what if there are multiple matches throughout the file? What if the string is within the first five lines, or multiple matches fall within five lines of each other? And so on.
Here's one way you could do it using awk. I assume that you also want to delete the line itself and that the file is small enough to fit into memory:
awk '{a[NR]=$0}/xxxx/{f=NR}END{for(i=1;i<=NR;++i)if(i<f-5||i>f+6)print a[i]}' file
Store every line into the array a. When the pattern /xxxx/ is matched, save the line number. After the whole file has been processed, loop through the array, only printing the lines you want to keep.
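To sanity-check it (seq, sed, and the 20-line test file are just for illustration), put the pattern on line 12 and confirm that lines 7 to 18 disappear:
$ seq 20 | sed '12s/.*/xxxx/' > file
$ awk '{a[NR]=$0}/xxxx/{f=NR}END{for(i=1;i<=NR;++i)if(i<f-5||i>f+6)print a[i]}' file
1
2
3
4
5
6
19
20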
Alternatively, you can use grep to obtain the line number first:
grep -n 'xxxx' file | awk -F: 'NR==FNR{f=$1}NR<f-5||NR>f+6' - file
In both cases, the lines deleted will be those surrounding the last line where the pattern matches.
A third option would be to use grep to obtain the line number then use sed to delete the lines:
line=$(grep -nm1 'xxxx' file | cut -d: -f1)
sed "$((line-5)),$((line+6))d" file
In this case I've also added the -m switch so grep exits after finding the first match.
If you know the line number (which is not difficult to obtain), you can use something like this:
filename="test"
start=`expr $curr_line - 5`
end=`expr $curr_line + 6`
sed "${start},${end}d" "$filename"   # optionally use sed -i to edit in place
Of course, you have to remember additional conditions: start shouldn't be less than 1, and end shouldn't be greater than the number of lines in the file; a sketch of that clamping follows.
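A minimal sketch of that clamping, in the same style (assumes $curr_line is set as above):
total=`wc -l < $filename`
start=`expr $curr_line - 5`
test $start -lt 1 && start=1
end=`expr $curr_line + 6`
test $end -gt $total && end=$total
sed "${start},${end}d" "$filename"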
Another, perhaps easier-to-follow, solution would be to use grep to find the keyword and the corresponding line:
grep -n 'KEYWORD' <file>
then use sed to get the line number only like this:
grep -n 'KEYWORD' <file> | sed 's/:.*//'
Now that you have the line number, simply use sed like this:
sed -i "${LINE_START},${LINE_END} d" <file>
to remove lines before and/or after the match! With only -i you will overwrite <file> (no backup).
A script example could be:
#!/bin/bash
KEYWORD=$1
LINES_BEFORE=$2
LINES_AFTER=$3
FILE=$4
LINE_NO=$(grep -n "$KEYWORD" "$FILE" | sed 's/:.*//')
echo "Keyword found in line: $LINE_NO"
LINE_START=$(($LINE_NO-$LINES_BEFORE))
LINE_END=$(($LINE_NO+$LINES_AFTER))
echo "Deleting lines $LINE_START to $LINE_END!"
sed -i "$LINE_START,$LINE_END d" $FILE
Please note that this will work only if the keyword is found once! Adapt the script to your needs!
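If the keyword may occur more than once, one possible adaptation (a sketch) is to keep only the first match, using grep's -m1 as in an earlier answer:
LINE_NO=$(grep -nm1 "$KEYWORD" "$FILE" | sed 's/:.*//')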

Quickest way to remove 70+ strings from a file?

I have 70+ strings I need to find and delete in a file. I need to remove the entire line in the file that the string appears in.
I know I can use sed -i '/string to remove/d' fileA.txt to remove them one at a time. However, considering I have 70+, it will take some time doing it this way.
Is there a way I can put these 70+ strings in a file and have sed go through them one by one? Or if I create a file containing the strings, is there a way to compare the two files so it removes any line from fileA that contains one of the strings?
You could use grep:
grep -vf file_with_words.txt file.txt
where file_with_words.txt would be the file containing the list of words, each word being on a different line and file.txt is the file that you want to remove the lines from.
If your list of words contains regex metacharacters, then tell grep to consider those as fixed strings (if that is what you want):
grep -F -vf file_with_words.txt file.txt
Using sed, you'd need to say:
sed '/word1\|word2\|word3/d' file.txt
or
sed -E '/word1|word2|word3/d' file.txt
You could use command substitution to construct the pattern too:
sed -E "/$(paste -sd'|' file_with_words.txt)/d" file.txt
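For example, with a three-word list (file contents made up for illustration), the substitution expands like this:
$ cat file_with_words.txt
word1
word2
word3
$ paste -sd'|' file_with_words.txt
word1|word2|word3
which turns the command into the sed -E form shown above.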
Still, grep is clearly the tool to use in this case.
If you want to do the job in bash, here's how:
search=fileA.txt
queries=queries.txt
while read -r query
do
    sed -i '' "/$query/d" "$search"   # BSD/macOS sed syntax; with GNU sed, use plain -i
done < "$queries"
where queries.txt looks like
I
want
to
delete
these
lines

bash: check if multiple files in a directory contain strings from a list

Folks,
I have a text file which contains multiple lines, with one string per line:
str1
str2
str3
etc..
I would like to read every line of this file and then search for those strings inside multiple files located in a different directory.
I am not quite sure how to proceed.
Thanks very much for your help.
awk 'NR==FNR{a[$0];next} { for (word in a) if ($0 ~ word) print FILENAME, $0 }' fileOfWords /wherever/dir/*
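The same program, split out with comments for readability (identical logic):
awk '
    NR==FNR { a[$0]; next }    # first file: store each search string as an array key
    {
        for (word in a)        # for each line of the remaining files,
            if ($0 ~ word)     # test every stored string as a regex
                print FILENAME, $0
    }
' fileOfWords /wherever/dir/*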
for wrd in $(cut -d, -f1 < testfile.txt); do grep "$wrd" dir/files* ; done
Use GNU grep's --file Option
According to grep(1):
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file
contains zero patterns, and therefore matches nothing. (-f is
specified by POSIX.)
The -H and -n flags will print the filename and line number of each match. So, assuming you store your patterns in /tmp/foo and want to search all files in /tmp/bar, you could use something like:
# Find regular files with GNU find and grep them all using a pattern
# file.
find /tmp/bar -type f -exec grep -Hnf /tmp/foo {} +
while read -r str
do
echo "$str"
grep "$str" /path/to/other/files
done < inputfile