Linux - Delete the lines of file1 from file2 (big data)
I have two files:
file1:
a
b
c
d
file2:
a
b
f
c
d
e
output file (file2) should be:
f
e
I want the lines of file1 to be deleted directly from file2. The output should not be a new file; the deletion should happen in file2 itself. Of course, a temporary file may be created along the way.
My real file2 contains more than 300,000 lines. That is the reason why a solution such as:
comm -13 file1 file2
doesn't work.
comm needs the input files to be sorted. You can use process substitution for that:
#!/bin/bash
comm -13 <(sort file1) <(sort file2) > tmp_file
mv tmp_file file2
Output:
e
f
Alternatively, if you have enough memory, you can use the following awk command which does not need the input to be sorted:
awk 'NR==FNR{a[$0];next} !($0 in a)' file1 file2
Output (original order preserved):
f
e
Keep in mind that the size of the array a directly depends on the size of file1.
PS: grep -vFf file1 file2 can also be used and the memory requirements are the same as for the awk solution. Given that, I would probably just use grep.
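To tie this back to the in-place requirement from the question, a minimal sketch of the grep route (the added -x is my assumption that only whole-line matches should be removed; without it, a file1 line that happens to be a substring of a file2 line would also delete that line):
#!/bin/bash
# Remove every whole line of file1 from file2, updating file2 in place via a temp file.
# -v: invert match, -F: fixed strings, -x: whole-line match, -f: read patterns from file1.
grep -vFxf file1 file2 > tmp_file && mv tmp_file file2
Note that grep exits non-zero when it selects no lines, so the && deliberately skips the move if every line of file2 would be removed; drop the && if an empty result should still overwrite file2.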
Related
Match columns 1,2,5 of file1 with columns 1,2,3 of file2 and output the matched rows from file2 (the second file is a zipped .gz file)
file1:
3 1234581 A C rs123456
file2 (zipped, .gz):
1 1256781 rs987656 T C
3 1234581 rs123456 A C
22 1792471 rs928376 G T
output:
3 1234581 rs123456 A C
I tried
zcat file2.gz | awk 'NR==FNR{a[$1,$2,$5]++;next} a[$1,$2,$3]' file1.txt - > output.txt
but it is not working.
Please try the following awk code for your shown samples. Use zcat to read your .gz file and pass it as the 2nd input to the awk program, to be read after it is done reading file1:
zcat your_file.gz | awk 'FNR==NR{arr[$1,$2,$5];next} (($1,$2,$3) in arr)' file1 -
Fixes to the OP's attempt: you need not increment the array's value while creating it from file1; the mere existence of the indexes is enough. When checking the condition while reading file2 (passed in by the zcat command), just test whether the respective fields are present in the array; if yes, print that line.
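For readers who want to try this end to end, a self-contained demo using the sample rows (the file names are illustrative only):
# Build the sample inputs.
printf '3 1234581 A C rs123456\n' > file1
printf '1 1256781 rs987656 T C\n3 1234581 rs123456 A C\n22 1792471 rs928376 G T\n' | gzip > file2.gz
# Run the answer's command; it should print: 3 1234581 rs123456 A C
zcat file2.gz | awk 'FNR==NR{arr[$1,$2,$5];next} (($1,$2,$3) in arr)' file1 -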
Get a line from a file and add it as a column in another file
I have two files.
File A:
Unix
File B:
A,B
C,D
E,f
.,.
.,.
N,N
Expected output:
A,B,Unix
C,D,Unix
E,f,Unix
.,.,Unix
.,.,Unix
N,N,Unix
How is this possible with a Linux shell script?
➜ cat a
A,B
C,D
E,f
.,.
.,.
N,N
➜ cat f2
Unix
➜ awk 'BEGIN{getline f2<"f2"}; {printf("%s,%s\n",$0,f2);}' a
A,B,Unix
C,D,Unix
E,f,Unix
.,.,Unix
.,.,Unix
N,N,Unix
Assuming fileA contains only 1 word, it's better to pass it to awk as a parameter:
awk -v v="Unix" 'BEGIN{FS=OFS=","}{$(NF+1)=v}1' fileB
If fileA contains more words, and assuming there is only one per line, you could also use this:
awk 'BEGIN{FS=OFS=","}NR==FNR{a[++i]=$1;next} {for(j=1; j<=i; j++) $(NF+1)=a[j]}1' fileA fileB
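If you'd rather not hard-code the word, a small variation of the first command (assuming fileA holds exactly one word on its first line) reads the value straight from the file:
# Pull the single word out of fileA and append it to every row of fileB.
awk -v v="$(head -n 1 fileA)" 'BEGIN{FS=OFS=","}{$(NF+1)=v}1' fileB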
And then there is good old paste as an option:
$ cat file1
UNIX
$ cat file2
A,B
C,D
E,F
$ paste -d',' file2 <(yes `cat file1` | head -n $(cat file2 | wc -l))
A,B,UNIX
C,D,UNIX
E,F,UNIX
The tricky part here is that the number of rows differs between file1 and file2, so we need to repeat the UNIX row of file1 as often as necessary (i.e., as many times as there are rows in file2) to be able to use paste.
Why does grep produce this behavior with these files?
I have two files in this format; they have the same number of columns, but the actual files have significantly more rows.
file1:
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
file2:
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
Both files have the same number of columns and rows, and are formatted exactly the same, to my knowledge. Neither file has an extension, so no .csv or .txt; it's simply the file name.
I wanted to compare each row in the files and output the count of the matching rows: look at row 1 in file1 and then row 1 in file2, and if they match that counts as one, and so forth. To do this I've done grep -c -f file1 file2 in the past; however, this time when I try it, grep never stops running and takes up an entire core.
When I do grep -w -c -f file1 file2 I do get back a count of the matching lines in the files. However, what is strange is that if I change the order, so grep -w -c -f file2 file1, I get back a different number. The actual files each have 2983 rows and 61 columns. For those files, grep -w -c -f file1 file2 returns 2021, but grep -w -c -f file2 file1 returns 1950.
Also, if I do diff file1 file2 I get this at the end of the other output: \ No newline at end of file. But if I do diff -w file1 file2, the no-newline message is no longer returned.
If you want to find the common lines, use comm or grep -Fx:
comm -12 file1 file2
grep -Fx -f file1 file2
You commented: "it worked the way I want it to, by adding a column with numbers that correspond to the line number". So you only want a match if line number N in file1 is the same as line number N in file2:
awk 'NR==FNR {line[FNR] = $0; next} $0 == line[FNR]' file1 file2
or with bash (this will likely be slower, but consume less memory):
while IFS= read -u3 -r line1; IFS= read -u4 -r line2; do
    [ "$line1" = "$line2" ] && echo "$line1"
done 3<file1 4<file2
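Since the question ultimately asks for a count of row-wise matches rather than the matching lines themselves, the awk version can simply be piped through wc -l; a minimal sketch:
# Count positions where line N of file1 equals line N of file2.
awk 'NR==FNR {line[FNR] = $0; next} $0 == line[FNR]' file1 file2 | wc -l
Unlike grep -c -f, this comparison is symmetric for files with the same number of rows, so swapping the file order gives the same count.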
There is an excellent tool called diff designed for exactly this purpose:
diff -c file1 file2       #Difference
comm -1 -2 file1 file2    #Common
This will tell you the matching lines, and also specify exactly where the lines differ.
Extract lines from File2 already found in File1
Using the Linux command line, I need to output the lines from text file2 that are already found in file1.
File1:
C
A
G
E
B
D
H
F
File2:
N
I
H
J
K
M
D
L
A
Output:
A
D
H
Thanks!
You are looking for the tool grep. Check this out. Let's say you have your inputs in the files file1 and file2; then
grep -f file1 file2
will return
H
D
A
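One caveat worth knowing: plain grep -f treats each line of file1 as a basic regular expression and matches substrings, which can over-match on larger or more complex inputs. For exact whole-line matching, a safer variant is:
# -F: treat patterns as fixed strings, -x: match whole lines only.
grep -Fx -f file1 file2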
A more flexible tool to use would be awk:
awk 'NR==FNR{lines[$0]++; next} $1 in lines'
Example:
$ awk 'NR==FNR{lines[$0]++; next} $1 in lines' file1 file2
H
D
A
What does it do?
NR==FNR{lines[$0]++; next}
NR==FNR checks whether the current file's record number equals the overall number of records; this is true only for the first file, file1. lines[$0]++ creates an associative array entry with the line of file1, $0, as the index.
$1 in lines
This pattern runs only for the second file, because of the next in the previous action. It checks whether the line of file2 is present in the saved array lines; if yes, the default action of printing the entire line is taken.
awk is more flexible than grep because you can match any column of file1 against any column of file2, and decide to print any column rather than the entire line.
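To illustrate that flexibility, a hedged sketch (the column choices are invented for the example, assuming both files have multiple whitespace-separated columns): match column 2 of file1 against column 1 of file2 and print only file2's third column:
# Remember column 2 of each file1 row, then print column 3 of file2 rows whose column 1 was seen.
awk 'NR==FNR{seen[$2]; next} $1 in seen{print $3}' file1 file2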
This is what the comm utility does, but you have to sort the files first. To get the lines in common between the 2 files:
comm -12 <(sort File1) <(sort File2)
Extracting unique values between 2 sets/files
Working in a Linux/shell environment, how can I accomplish the following:
text file 1 contains:
1
2
3
4
5
text file 2 contains:
6
7
1
2
3
4
I need to extract the entries in file 2 which are not in file 1, so '6' and '7' in this example. How do I do this from the command line? Many thanks!
$ awk 'FNR==NR {a[$0]++; next} !($0 in a)' file1 file2
6
7
Explanation of how the code works:
If we're working on file1, track each line of text we see.
If we're working on file2 and have not seen the line's text, then print it.
Explanation of details:
FNR is the current file's record number
NR is the current overall record number from all input files
FNR==NR is true only when we are reading file1
$0 is the current line of text
a[$0] is a hash with the key set to the current line of text
a[$0]++ tracks that we've seen the current line of text
!($0 in a) is true only when we have not seen the line's text
Print the line of text if the above pattern returns true; this is the default awk behavior when no explicit action is given
Using some lesser-known utilities:
sort file1 > file1.sorted
sort file2 > file2.sorted
comm -1 -3 file1.sorted file2.sorted
This will output duplicates, so if there is 1 3 in file1, but 2 in file2, this will still output 1 3. If this is not what you want, pipe the output from sort through uniq before writing it to a file:
sort file1 | uniq > file1.sorted
sort file2 | uniq > file2.sorted
comm -1 -3 file1.sorted file2.sorted
There are lots of utilities in the GNU coreutils package that allow for all sorts of text manipulations.
I was wondering which of the following solutions was the "fastest" for "larger" files:
awk 'FNR==NR{a[$0]++}FNR!=NR && !a[$0]{print}' file1 file2   # awk1 by SiegeX
awk 'FNR==NR{a[$0]++;next}!($0 in a)' file1 file2            # awk2 by ghostdog74
comm -13 <(sort file1) <(sort file2)
join -v 2 <(sort file1) <(sort file2)
grep -v -F -x -f file1 file2
Results of my benchmarks in short:
Do not use grep -Fxf; it's much slower (2-4 times in my tests).
comm is slightly faster than join.
If file1 and file2 are already sorted, comm and join are much faster than awk1 + awk2. (Of course, they do not assume sorted files.)
awk1 + awk2, supposedly, use more RAM and less CPU. Real run times are lower for comm, probably due to the fact that it uses more threads. CPU times are lower for awk1 + awk2.
For the sake of brevity I omit full details. However, I assume that anyone interested can contact me or just repeat the tests. Roughly, the setup was:
# Debian Squeeze, Bash 4.1.5, LC_ALL=C, slow 4 core CPU
$ wc file1 file2
 321599  321599 8098710 file1
 321603  321603 8098794 file2
Typical results of fastest runs:
awk2: real 0m1.145s  user 0m1.088s  sys 0m0.056s  user+sys 1.144
awk1: real 0m1.369s  user 0m1.324s  sys 0m0.044s  user+sys 1.368
comm: real 0m0.980s  user 0m1.608s  sys 0m0.184s  user+sys 1.792
join: real 0m1.080s  user 0m1.756s  sys 0m0.140s  user+sys 1.896
grep: real 0m4.005s  user 0m3.844s  sys 0m0.160s  user+sys 4.004
BTW, for the awkies: it seems that a[$0]=1 is faster than a[$0]++, and !($0 in a) is faster than !a[$0]. So, for an awk solution I suggest:
awk 'FNR==NR{a[$0]=1;next}!($0 in a)' file1 file2
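For anyone who wants to repeat the tests, a rough reconstruction of the harness (the file names and LC_ALL=C come from the setup above; the exact script is my assumption, not the original):
#!/bin/bash
# Time each candidate on the same pair of files; discard output to measure pure processing.
export LC_ALL=C
time awk 'FNR==NR{a[$0]=1;next}!($0 in a)' file1 file2 > /dev/null   # awk2 variant
time comm -13 <(sort file1) <(sort file2) > /dev/null
time join -v 2 <(sort file1) <(sort file2) > /dev/null
time grep -v -F -x -f file1 file2 > /dev/null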
How about:
diff file_1 file_2 | grep '^>' | cut -c 3-
This would print the entries in file_2 which are not in file_1. For the opposite result, one just has to replace '>' with '<'. cut removes the first two characters added by diff, which are not part of the original content. The files don't even need to be sorted.
With grep:
grep -F -x -v -f file_1 file_2
Here's another awk solution:
$ awk 'FNR==NR{a[$0]++;next}(!($0 in a))' file1 file2
6
7
$ cat file1 file1 file2 | sort | uniq -u
6
7
uniq -- report or filter out repeated lines in a file ... Repeated lines in the input will not be detected if they are not adjacent, so it may be necessary to sort the files first.
-u   Only output lines that are not repeated in the input.
Print file1 twice to make sure all entries from file1 are skipped by uniq -u.
cat file1 file2 | sort -u > unique
If you are really set on doing this from the command line, this site (search for "no duplicates found") has an awk example that searches for duplicates. It may be a good starting point. However, I'd encourage you to use Perl or Python for this. Basically, the flow of the program would be:
findUniqueValues(file1, file2){
   contents1 = array of values from file1
   contents2 = array of values from file2
   foreach(value2 in contents2){
      found = false
      foreach(value1 in contents1){
         if (value2 == value1) found = true
      }
      if (!found) print value2
   }
}
This isn't the most elegant way of doing it, since it has O(n^2) time complexity, but it will do the job.