I would like to diff two very large files (multi-GB), using linux command line tools, and see the line numbers of the differences. The order of the data matters.
I am running on a Linux machine and the standard diff tool gives me the "memory exhausted" error. -H had no effect.
In my application, I only need to stream the diff results. That is, I just want to look at the first few differences visually; I don't need to inspect the entire file. If there are differences, a quick glance will tell me what is wrong.
'comm' seems well suited to this, but it does not display line numbers of the differences.
In general, my multi-GB files only have a few hundred lines that are different, the rest of the file is the same.
Is there a way to get comm to dump the line number? Or a way to make diff run without loading the entire file into memory? (like cutting the input files into 1k blocks, without actually creating a million 1k-files in my filesystem and cluttering everything up)?
I won't use comm, but since you said WHAT you need, in addition to HOW you thought you should do it, I'll focus on the "WHAT you need" instead:
An interesting way would be to use paste and awk: paste can show 2 files "side by side" using a separator. If you use \n as the separator, it displays the 2 files with line 1 of each, followed by line 2 of each, etc.
So the script you could use could simply be (once you know that both files have the same number of lines):
paste -d '\n' /tmp/file1 /tmp/file2 | awk '
NR%2 { linefirstfile=$0 ; }
!(NR%2) { if ( $0 != linefirstfile )
{ print "line",NR/2,": "; print linefirstfile ; print $0 ; } }'
(Interestingly, this solution can easily be extended to diff N files in a single read, whatever the sizes of the N files are ... just add a check that they all have the same number of lines before doing the comparison steps (otherwise paste will, towards the end, show only lines from the bigger files).)
Here is a (short) example, to show how it works:
$ cat > /tmp/file1
A
C %FORGOT% fmsdflmdflskdf dfldksdlfkdlfkdlkf
E
$ cat > /tmp/file2
A
C sdflmsdflmsdfsklmdfksdmfksd fmsdflmdflskdf dfldksdlfkdlfkdlkf
E
$ paste -d '\n' /tmp/file1 /tmp/file2
A
A
C %FORGOT% fmsdflmdflskdf dfldksdlfkdlfkdlkf
C sdflmsdflmsdfsklmdfksdmfksd fmsdflmdflskdf dfldksdlfkdlfkdlkf
E
E
$ paste -d '\n' /tmp/file1 /tmp/file2 | awk '
NR%2 { linefirstfile=$0 ; }
!(NR%2) { if ( $0 != linefirstfile )
{ print "line",NR/2,": "; print linefirstfile ; print $0 ; } }'
line 2 :
C %FORGOT% fmsdflmdflskdf dfldksdlfkdlfkdlkf
C sdflmsdflmsdfsklmdfksdmfksd fmsdflmdflskdf dfldksdlfkdlfkdlkf
If it happens that the files don't have the same number of lines, then you can first add a check of the line counts, comparing $(wc -l < /tmp/file1) and $(wc -l < /tmp/file2), and only run the paste ... | awk if they match, to ensure paste works correctly by always pairing one line of each. (Of course, in that case, there will be one extra (fast!) full read of each file...)
You can easily adjust it to display exactly what you need. And you could quit after the Nth difference (either automatically, with a counter in the awk script, or by pressing CTRL-C when you've seen enough).
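For example, here is a minimal sketch of the automatic variant, stopping after the 10th difference (the maxdiff limit and the ndiff counter name are arbitrary):
paste -d '\n' /tmp/file1 /tmp/file2 | awk -v maxdiff=10 '
NR%2 { linefirstfile=$0 }
!(NR%2) && $0 != linefirstfile {
    print "line",NR/2,": " ; print linefirstfile ; print $0
    if (++ndiff >= maxdiff) exit   # stop reading once enough differences have been shown
}'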
Which versions of diff have you tried? GNU diff has a "--speed-large-files" which may help.
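For reference, GNU diff's line-format options can also print only the differing lines together with their line numbers, which matches the "glance at the first few differences" goal; note that this still runs the normal diff algorithm, so it does not by itself fix the memory problem. A sketch (file names are placeholders):
diff --speed-large-files \
     --unchanged-line-format='' \
     --old-line-format='file1 %dn: %L' \
     --new-line-format='file2 %dn: %L' \
     file1 file2 | head -n 40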
The comm tool assumes the lines are sorted.
I am using grep in a while loop to find lines from one file in another file and saving the output to a new file. My file is quite large (226 million lines) and the script is taking forever (12 days and counting). Do you have a suggestion to speed it up, perhaps there is a better way rather than grep?
(I also need the preceding line for the output, therefore grep -B 1.)
Here is my code:
#!/bin/bash
while IFS= read -r line; do
grep -B 1 $line K33.21mercounts.bf.trimmedreads.dumps.fa >> 21mercounts.bf.trimmedreads.diff.kmers.K33;
done <21mercounts.bf.trimmedreads.diff.kmers
Update:
The input file with the lines to look for is 4.7 GB with 226 million lines and looks like this:
AAAGAAAAAAAAAGCTAAAAT
ATCTCGACGCTCATCTCAGCA
GTTCGTCGGAGAGGAGAGAAC
GAGGACTATAAAATTGTCGCA
GGCTTCAATAATTTGTATAAC
GACATAGAATCACGAGTGACC
TGGTGAGTGACATCCTTGACA
ATGAAAACTGCCAGCAAACTC
AAAAAACTTACCTTAAAAAGT
TTAGTACACAATATCTCCCAA
The file to look in is 26 GB and 2 billion lines and looks like this:
>264638
AAAAAAAAAAAAAAAAAAAAA
>1
AAAGAAAAAAAAAGCTAAAAT
>1
ATCTCGACGCTCATCTCAGCA
>1
GTTCGTCGGAGAGGAGAGAAC
>28
TCTTTTCAGGAGTAATAACAA
>13
AATCATTTTCCGCTGGAGAGA
>38
ATTCAATAAATAATAAATTAA
>2
GAGGACTATAAAATTGTCGCA
>1
GGCTTCAATAATTTGTATAAC
The expected output would be this:
>1
AAAGAAAAAAAAAGCTAAAAT
>1
ATCTCGACGCTCATCTCAGCA
>1
GTTCGTCGGAGAGGAGAGAAC
>2
GAGGACTATAAAATTGTCGCA
>1
GGCTTCAATAATTTGTATAAC
You can try this grep -f command without a shell loop, using a fixed-string search:
grep -B1 -Ff 21mercounts.bf.trimmedreads.diff.kmers \
K33.21mercounts.bf.trimmedreads.dumps.fa > 21mercounts.bf.trimmedreads.diff.kmers.K33
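If the k-mers should only ever match complete sequence lines (rather than substrings of longer lines), adding -x to anchor whole-line matches may help both correctness and speed; a hedged variant:
grep -B1 -xFf 21mercounts.bf.trimmedreads.diff.kmers \
    K33.21mercounts.bf.trimmedreads.dumps.fa > 21mercounts.bf.trimmedreads.diff.kmers.K33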
There are quite a few tools (e.g. ripgrep) and options (-f, -F, and -x) to speed up your basic approach. But all of them are basically the same slow approach as you are using now, "only" sped up by a huge but still constant factor. For your problem and input sizes, I'd recommend changing the approach altogether. There are many different ways to tackle your problem. First, let's define some variables to estimate the speedup of those approaches:
Problem
A 26 GB haystack file with h = 1 billion entries (description, sequence) = 2 billion lines, e.g.
>28
TCTTTTCAGGAGTAATAACAA
>13
AATCATTTTCCGCTGGAGAGA
>38
ATTCAATAAATAATAAATTAA
...
A 4.7 GB needles file with n = 226 million lines, each of length m = 21, e.g.
GACATAGAATCACGAGTGACC
TGGTGAGTGACATCCTTGACA
ATGAAAACTGCCAGCAAACTC
...
For all needles, we want to extract the corresponding entries in the haystack (if they exist).
Solutions
We assume n < h and a constant m. Therefore O(n+h) = O(h), O(m)=O(1) and so on.
Our goal is to minimize the number of times we have to iterate over the biggest file (= the haystack).
Naive – O(h·n) time
Currently, you are using the naive approach. For each needle, the entire haystack is searched once.
Put needles into data structure; search haystack once – O(h) time
Store all needles in a data structure which has a fast contains() operation.
Then iterate the haystack and call needles.contains(haystackEntry) for each entry, to decide whether it is something you are searching for.
Currently, your "data structure" is a list, which takes O(1) time to "build" (because it is already in that form), but O(n) time to query once!
Below data structures take O(n) time to populate and O(1) time to query once, resulting in O(n + h·1) = O(h) time in total.
Tries (= prefix trees) can be expressed as regexes, so you can stick with grep. E.g. the needles ABC, ABX, and XBC can be stored in the Trie regex ^(AB(C|X)|XBC). But converting the list of needles to such a Trie regex is a bit complicated in bash.
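For illustration only: once such a regex exists (however it was generated), the lookup itself is a single anchored extended-regex pass over the haystack, e.g. with the toy regex above:
grep -B1 -E '^(AB(C|X)|XBC)$' haystack    # the $ anchors the end, since all needles have the same length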
Hash maps are available in awk, see sundeep's answer. But putting 4.7 GB of raw data in such a structure in memory is probably not very efficient, if even possible (it depends on your available memory; the hash map needs to be many times bigger than the raw data).
Either way, data structures and bash don't mix very well. And even if we switched to a better language, we would have to re-build or store/load the structure each time the program runs. Therefore it is easier and nearly as efficient to ...
Sort everything; search haystack once – O( h·log(h) + h ) time
First sort the haystack and the needles, then iterate the haystack only once.
Take the first needle and search the haystack from the beginning. When you reach a haystack entry that would sort after the current needle, take the next needle and continue the search from your current location.
This can be done easily in bash. Here we use GNU coreutils to make processing a bit easier, faster, and safer:
export LC_ALL=C # speeds up sorting
mem=66% # Max. memory to be used while sorting. More is better.
sep=$'\f' # A character not appearing in your data.
paste -d"$sep" - - < haystack > haystack2
sort -S"$mem" -o needles2 needles
sort -S"$mem" -t"$sep" -k2,2 -o haystack2 haystack2
# --nocheck-order is not needed, but speeds up the process
join -t"$sep" -22 -o2.1,2.2 --nocheck-order needles2 haystack2 |
tr "$sep" \\n
This changes the order of the output. If you need the output in the original order, use a Schwartzian transform (= decorate-sort-undecorate): Before sorting the needles/haystack, store their line numbers. Drag those along through the entire process. At the end, sort the found entries by those line numbers. Finally, remove the line numbers and print the result.
export LC_ALL=C # speeds up sorting
mem=66% # Max. memory to be used while sorting. More is better.
sep=$'\f' # A character not appearing in your data.
nl -ba -d '' -s"$sep" needles > needles2
paste -d"$sep" - - < haystack | nl -ba -d '' -s"$sep" > haystack2
sort -t"$sep" -k2,2 -S"$mem" -o needles2 needles2
sort -t"$sep" -k3,3 -S"$mem" -o haystack2 haystack2
# --nocheck-order is not needed, but speeds up the process
join -t"$sep" -12 -23 -o1.1,2.1,2.2,2.3 --nocheck-order needles2 haystack2 > result
sort -t"$sep" -k1,2n -S"$mem" -o result result
cut -d"$sep" -f3- result | tr "$sep" \\n
If preserving the original order is not required, using GNU uniq and GNU sed:
{ cat 21mercounts.bf.trimmedreads.diff.kmers
sed -n 'x;n;G;s/\n//p' K33.21mercounts.bf.trimmedreads.dumps.fa
} | LC_ALL=C sort | uniq -w21 -D |
sed -n 's/\(.*\)>\(.*\)/>\2\n\1/p' > 21mercounts.bf.trimmedreads.diff.kmers.K33
Here's a solution using awk. Not sure if it will be faster than grep or ripgrep, but it might be, thanks to the hash-based lookup. This assumes your RAM is big enough to load the first file (4.7 GB, 226 million lines).
$ awk 'NR==FNR{a[$1]; next} $0 in a{print p; print} {p=$0}' f1 f2
>1
AAAGAAAAAAAAAGCTAAAAT
>1
ATCTCGACGCTCATCTCAGCA
>1
GTTCGTCGGAGAGGAGAGAAC
>2
GAGGACTATAAAATTGTCGCA
>1
GGCTTCAATAATTTGTATAAC
mawk is usually the fastest option, but I have come across examples where gawk is faster, especially for arrays like in this command. If you can install frawk, that can give you even faster results. The command needs to be slightly modified:
frawk 'NR==FNR{a[$1]; next} $0 in a{print p; print $0} {p=$0}' f1 f2
Any time I deal with files this big, I almost always end up sorting them. Sorts are slow, but they take a lot less time than your while read loop, which scans 2 billion lines 226 million times.
sort 4GB>4gb.srt
and
sed '/>/{N;s/\n/ /}' 26GB |sort -t' ' -k2 >25gb.srt
which will produce a file like this:
>264638 AAAAAAAAAAAAAAAAAAAAA
>1 AAAGAAAAAAAAAGCTAAAAT
>13 AATCATTTTCCGCTGGAGAGA
>1 ATCTCGACGCTCATCTCAGCA
>38 ATTCAATAAATAATAAATTAA
>2 GAGGACTATAAAATTGTCGCA
>1 GGCTTCAATAATTTGTATAAC
>1 GTTCGTCGGAGAGGAGAGAAC
>28 TCTTTTCAGGAGTAATAACAA
Now you only have to read through each file once.
$ cat tst
awk 'BEGIN{ getline key < "4gb.srt"; }
$2 < key { next; }
$2 > key { while ($2 > key){ getline key < "4gb.srt"; } }
$2 == key { $0=gensub(/ /,"\n",1); print }' 25gb.srt
$ ./tst
>1
AAAGAAAAAAAAAGCTAAAAT
>1
ATCTCGACGCTCATCTCAGCA
>2
GAGGACTATAAAATTGTCGCA
>1
GGCTTCAATAATTTGTATAAC
>1
GTTCGTCGGAGAGGAGAGAAC
The ordering is different from yours, but otherwise does that work?
(Try some tests with smaller files first...)
addendum
Please c.f. Socowi's better implementation, but I was asked to explain the awk, so -
First, see above where I parsed the larger "haystraw" file to single lines sorted on the key field, which will be $2 in awk, and the smaller "needles" file in the same order. Making a few (not necessarily safe) assumptions, I ran through both once.
BEGIN{ getline key < "4gb.srt"; }
This just initializes the first "needle" into a variable called key by reading from the appropriate file.
Then as awk reads each line of the "haystraw" file, it automatically parses it into the fields - since we stacked them, the first field is the previous line of the original haystack, and the second field is the value to check, so we do our comparisons between key and $2.
$2 < key { next; } # skip ahead to next key/needle
If the current straw is less than the needle, throw it away and grab the next one.
$2 > key { while ($2 > key){ getline key < "4gb.srt"; } }
If the current straw is greater than the needle, then the needle wasn't in the file. The next one might not be either, so we grab needles in sequential order and compare them until they catch up.
There's actually a potential bug here - it's not confirming that something was read and could hang in an endless loop when the needles run out. This section should have been something like -
$2 > key { while ($2 > key) { if (0 == getline key < "4gb.srt") key = "ZZZZZZZZZZZZZZZZZZZZZZ"; } }
Finally,
$2 == key { $0=gensub(/ /,"\n",1); print }' 25gb.srt
If they match, reinsert the newline between the previous record and the matching line, and print them both.
There really should also have been an END{ close("4gb.srt") } as well.
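For completeness, here is the whole thing with the end-of-needles guard and the close() folded in (an untested sketch; gawk because of gensub, same placeholder file names as above):
awk 'BEGIN{ getline key < "4gb.srt" }
     $2 < key  { next }
     $2 > key  { while ($2 > key) {
                     # stop advancing once the needles run out
                     if (0 == getline key < "4gb.srt") key = "ZZZZZZZZZZZZZZZZZZZZZZ"
                 } }
     $2 == key { $0 = gensub(/ /, "\n", 1); print }
     END{ close("4gb.srt") }' 25gb.srt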
grep can search for many patterns (given in a separate file) simultaneously, so reading K33.21mercounts.bf.trimmedreads.dumps.fa will only be done once.
Something like the following might work:
#!/bin/bash
grep -f 21mercounts.bf.trimmedreads.diff.kmers -B 1 K33.21mercounts.bf.trimmedreads.dumps.fa >> 21mercounts.bf.trimmedreads.diff.kmers.K33
However, it probably requires lots of RAM.
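If memory does turn out to be the bottleneck, one possible workaround (an untested sketch; the chunk size and the kmer_chunk_ prefix are arbitrary, tune them to your RAM) is to split the pattern file and run grep once per chunk. Each pass still reads the big file, but the pattern set held in memory stays bounded:
split -l 50000000 21mercounts.bf.trimmedreads.diff.kmers kmer_chunk_
for chunk in kmer_chunk_*; do
    grep -B 1 -Ff "$chunk" K33.21mercounts.bf.trimmedreads.dumps.fa
done > 21mercounts.bf.trimmedreads.diff.kmers.K33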
I have a file named "compare" and a file named "final_contigs_c10K.fa".
I want to eliminate lines AND THE NEXT LINE from "final_contigs_c10K.fa" that contain specific strings listed in "compare".
compare looks like this :
k119_1
k119_3
...
and the number of lines of compare is 26364.
final_contigs_c10K.fa looks like :
>k119_1
AAAACCCCC
>k119_2
CCCCC
>k119_3
AAAAAAAA
...
I want to turn final_contigs_c10K.fa into this format:
>k119_1
AAAACCCCC
>k119_3
AAAAAAAA
...
I tried this code, and it seems to work fine, but it takes too much time. I think that's because compare has 26364 lines, which is a lot compared to the other files I had tested the code on.
while read line; do sed -i -e "/$line/ { N; d; }" final_contigs_c10K.fa; done < compare
Is there a way to make this command faster?
Using awk
$ awk 'NR==FNR{a[">" $1];next}$1 in a{p=3} --p>0' compare final_contigs_c10K.fa
>k119_1
AAAACCCCC
>k119_3
AAAAAAAA
This will print the output to stdout, i.e. it won't make any changes to the original files.
Explained:
$ awk '
NR==FNR { # process the first file
a[">" $1] # hash to a, adding > while at it
next # process the next record
} # process the second file after this point
$1 in a { p=3 } # if current record was in compare file set p
--p>0 # print current file match and the next record
' compare final_contigs_c10K.fa # mind the file order
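If you do want to update final_contigs_c10K.fa itself, a common pattern (a sketch; the .tmp name is just a placeholder) is to write to a temporary file and then replace the original:
awk 'NR==FNR{a[">" $1];next}$1 in a{p=3} --p>0' compare final_contigs_c10K.fa > final_contigs_c10K.fa.tmp &&
mv final_contigs_c10K.fa.tmp final_contigs_c10K.fa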
I have a rather large file. What is common to all of it is the hostname line that breaks up each section. Example:
HOSTNAME:host1
data 1
data here
data 2
text here
section 1
text here
part 4
data here
comm = 2
HOSTNAME:host-2
data 1
data here
data 2
text here
section 1
text here
part 4
data here
comm = 1
As you see above, in between the HOSTNAME lines there are other sections broken down by keywords or lines that have specific values.
I'd like to use a one-liner to print the hostname for each section and then print whichever lines I want to extract under each hostname section.
Can you please help? I am currently using grep -C 10 HOSTNAME | grep -C pattern,
but this assumes that there are 10 lines in each section, which is not an optimal way to do this; can someone show a better way? I also need to be able to print more than one line under each pattern that I find. So if I find data 1 and there are additional lines under it, I'd like to grab and print them too.
So the output of a command like
grep -C 10 HOSTNAME | grep 'data 1'
or
grep -C 10 HOSTNAME | grep -A 2 'data 1'
would be:
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
Besides grep, I use this sed command to print my output:
sed -r '/HOSTNAME|shared/!d' filename
The only problem with this sed command is that it only prints the lines that contain the patterns shared and HOSTNAME. I also need to be able to specify the number of lines to print under the line that matched the pattern shared. So I'd like to print HOSTNAME and give the number of lines to print under the second search pattern, shared.
Thanks
awk to the rescue!
$ awk -v lines=2 '/HOSTNAME/{c=lines} NF&&c&&c--' file
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
This prints lines non-empty lines, starting with the line that matches the pattern; empty lines are skipped and don't count toward the total.
If you want to specify a secondary keyword instead of a number of lines:
$ awk -v key='data 1' '/HOSTNAME/{h=1; print} h&&$0~key{print; h=0}' file
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
Here is a sed two-liner:
sed -n -r '/HOSTNAME/ { p }
/^\s+data 1/ {p }' hostnames.txt
It prints (p)
when the line contains a HOSTNAME
when the line starts with some whitespace (\s+) followed by your search criterion (data 1)
non-matching lines are not printed (due to the sed -n option)
Edit: Some remarks:
this was tested with GNU sed 4.2.2 under linux
you don't need the -r if your sed version does not support it; just replace the second pattern with /^.*data 1/
we can squash everything into one line with ;
Putting it all together, here is a revised version in one line, without the need for the extended regex ( i.e without -r):
sed -n '/HOSTNAME/ { p } ; /^.*data 1/ {p }' hostnames.txt
The OP requirements seem to be very unclear, but the following is consistent with one interpretation of what has been requested, and more importantly, the program has no special requirements, and the code can easily be modified to meet a variety of requirements. In particular, both search patterns (the HOSTNAME pattern and the "data 1" pattern) can easily be parameterized.
The main idea is to print all lines in a specified subsection, or at least a certain number up to some limit.
If there is a limit on how many lines in a subsection should be printed, specify a value for limit, otherwise set it to 0.
awk -v limit=0 '
/^HOSTNAME:/ { subheader=0; hostname=1; print; next}
/^ *data 1/ { subheader=1; print; next }
/^ *data / { subheader=0; next }
subheader && (limit==0 || (subheader++ < limit)) { print }'
Given the lines provided in the question, the output would be:
HOSTNAME:host1
data 1
HOSTNAME:host-2
data 1
(Yes, I know the variable 'hostname' in the awk program is currently unused, but I included it to make it easy to add a test to satisfy certain obvious requirements regarding the preconditions for identifying a subheader.)
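As a concrete example of the parameterization mentioned above, the "data 1" pattern could be passed in as an awk variable (a sketch; pat and the hostnames.txt file name are placeholders):
awk -v limit=0 -v pat='^ *data 1' '
/^HOSTNAME:/ { subheader=0; print; next }
$0 ~ pat     { subheader=1; print; next }
/^ *data /   { subheader=0; next }
subheader && (limit==0 || (subheader++ < limit)) { print }' hostnames.txt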
sed -n -e '/hostname/,+1p' -e '/Duplex/,+1p'
The simplest way to do it is to combine two sed range commands.
I have a huge set of files, 64,000, and I want to create a Bash script that lists the names of the files using
ls -1 > file.txt
for every 4,000 files and stores the resulting file.txt in a separate folder. So every 4,000 files have their names listed in a text file that is stored in its own folder. The result is:
folder01 contains file.txt that lists files #0-#4000
folder02 contains file.txt that lists files #4001-#8000
folder03 contains file.txt that lists files #8001-#12000
.
.
.
folder16 contains file.txt that lists files #60000-#64000
Thank you very much in advance
You can try
ls -1 | awk '
{
if (! ((NR-1)%4000)) {
if (j) close(fnn)
fn=sprintf("folder%02d",++j)
system("mkdir "fn)
fnn=fn"/file.txt"
}
print >> fnn
}'
Explanation:
NR is the current record number in awk, that is: the current line number.
NR starts at 1, on the first line, so we subtract 1 such that the if statement is true for the first line
system runs a shell command from within awk
print by itself prints the current line to standard output; we can redirect (and append) the output to a file using >>
All uninitialized variables in awk will have a zero value, so we do not need to say j=0 in the beginning of the program
This will get you pretty close:
ls -1 | split -l 4000 -d - folder
Run the result of ls through split, breaking every 4000 lines (-l 4000), using numeric suffixes (-d), reading from standard input (-), and starting the output file names with folder.
Results in folder00, folder01, ...
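Since the question wants each list inside its own folder as file.txt, a small follow-up loop (a sketch; the .d suffix just avoids a name clash with the split output) can move the pieces into place:
for f in folder[0-9][0-9]; do
    mkdir "$f.d" && mv "$f" "$f.d/file.txt"
done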
Here is an exact solution using awk:
ls -1 | awk '
(NR-1) % 4000 == 0 {
dir = sprintf("folder%02d", ++nr)
system("mkdir -p " dir);
}
{ print >> (dir "/file.txt") }'
There are already some good answers above, but I would also suggest you take a look at the watch command. This will re-run a command every n seconds, so you can, well, watch the output.
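For example, assuming the folderNN layout from the answers above, this would show how many folders have been created so far, refreshing every two seconds (the default):
watch 'ls -d folder*/ 2>/dev/null | wc -l'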
I have two big files with a lot of text, and what I have to do is keep all lines in file A that have a field that matches a field in file B.
file A is something like:
Name (tab) # (tab) # (tab) KEYFIELD (tab) Other fields
For file B, I managed to use cut and sed and other things to basically get it down to one field that is a list.
So the goal is to keep all lines in file A whose 4th field (labeled KEYFIELD) matches one of the lines in file B. (It does NOT have to be an exact match, so if file B had Blah and file A said Blah_blah, that would be OK.)
I tried to do:
grep -f fileBcutdown fileA > outputfile
EDIT: Ok I give up. I just force killed it.
Is there a better way to do this? File A is 13.7MB and file B after cutting it down is 32.6MB for anyone that cares.
EDIT: This is an example line in file A:
chr21 33025905 33031813 ENST00000449339.1 0 - 33031813 33031813 0 3 1835,294,104, 0,4341,5804,
example line from file B cut down:
ENST00000111111
Here's one way using GNU awk. Run like:
awk -f script.awk fileB.txt fileA.txt
Contents of script.awk:
FNR==NR {
array[$0]++
next
}
{
line = $4
sub(/\.[0-9]+$/, "", line)
if (line in array) {
print
}
}
Alternatively, here's the one-liner:
awk 'FNR==NR { array[$0]++; next } { line = $4; sub(/\.[0-9]+$/, "", line); if (line in array) print }' fileB.txt fileA.txt
GNU awk can also perform the pre-processing of fileB.txt that you described using cut and sed. If you would like me to build this into the above script, you will need to provide an example of what this line looks like.
UPDATE using files HumanGenCodeV12 and GenBasicV12:
Run like:
awk -f script.awk HumanGenCodeV12 GenBasicV12 > output.txt
Contents of script.awk:
FNR==NR {
gsub(/[^[:alnum:]]/,"",$12)
array[$12]++
next
}
{
line = $4
sub(/\.[0-9]+$/, "", line)
if (line in array) {
print
}
}
This successfully prints lines in GenBasicV12 that can be found in HumanGenCodeV12. The output file (output.txt) contains 65340 lines. The script takes less than 10 seconds to complete.
You're hitting the limit of using the basic shell tools. Assuming about 40 characters per line, File A has 400,000 lines in it and File B has about 1,200,000 lines in it. You're basically running grep for each line in File A and having grep plow through 1,200,000 lines with each execution. That's 480 BILLION lines you're parsing through. Unix tools are surprisingly quick, but even something fast done 480 billion times adds up.
You would be better off using a full scripting language like Perl or Python. You put all the lines of File B in a hash, then take each line of File A and check whether its fourth field matches something in the hash.
Reading in a few hundred thousand lines? Creating a 10,000,000 entry hash? Perl can parse both of those in a matter of minutes.
Something -- off the top of my head. You didn't give us much in the way of specs, so I didn't do any testing:
#! /usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw(say);
# Create your index
open my $file_b, "<", "file_b.txt";
my %index;
while (my $line = <$file_b>) {
chomp $line;
$index{$line} = $line; #Or however you do it...
}
close $file_b;
#
# Now check against file_a.txt
#
open my $file_a, "<", "file_a.txt";
while (my $line = <$file_a>) {
chomp $line;
my @fields = split /\s+/, $line;
if (exists $index{$fields[3]}) {
say "Line: $line";
}
}
close $file_a;
The hash means you only have to read through file_b once instead of 400,000 times. Start the program, go grab a cup of coffee from the office kitchen. (Yum! non-dairy creamer!) By the time you get back to your desk, it'll be done.
grep -f seems to be very slow even for medium sized pattern files (< 1MB). I guess it tries every pattern for each line in the input stream.
A solution which was faster for me was to use a while loop. This assumes that fileA is reasonably small (it is the smaller one in your example), so iterating multiple times over the smaller file is preferable to iterating multiple times over the larger one.
while read line; do
grep -F "$line" fileA
done < fileBcutdown > outputfile
Note that this loop will output a line several times if it matches multiple patterns. To work around this limitation use sort -u, but this might be slower by quite a bit. You have to try.
while read line; do
grep -F "$line" fileA
done < fileBcutdown | sort -u > outputfile
If you depend on the order of the lines, then I don't think you have any other option than using grep -f. But basically it boils down to trying m*n pattern matches.
Use the command below:
awk 'FNR==NR{a[$0];next}($4 in a)' <your filtered fileB with single field> fileA