Compare two files and store differences using conditional - linux
I managed to find half of the solution to my challenge, but I cannot find a way to add a conditional to deal with the other half. I am using awk. The field separator is ; and the values are enclosed in double quotes ("). Each file has only three fields.
I have two files (file1.txt, file2.txt) and want to store the differences in a third file (results.txt).
file1.txt
"SWITCH1";"rack7";"Datacenter1"
"SWTICH46";"rack1";"rack1"
"ROUTER3";"";"rack1"
"SWITCH7";"rack1";"rack1"
"ROUTER9";"rack1";"rack1"
"ROUTER22";"rack1";"Datacenter4"
file2.txt
"SWITCH1";"rack7";"Datacenter1"
"ROUTER22";";"Datacenter4"
"SWITCH51";"rack7";"Datacenter2"
If I use:
awk -F';' 'FNR==NR {a[$0];next} !($0 in a)' file1.txt file2.txt
I get:
"ROUTER22";";"Datacenter4"
"SWITCH51";"rack7";"Datacenter2"
But I do not want a bare " in $2 of file2.txt versus rack1 in $2 of file1.txt to count as a difference between the files. Whenever an entry in file2.txt has " in field $2 and the file1.txt entry with the same $1 has rack1 in field $2, I want to discard it rather than report it as a difference.
The files are generated dynamically every night, and when this happens, field $2 is rack1 in file1.txt while field $2 is " in file2.txt. This is the match to exclude, in addition to the ones the awk command above already excludes. Below is the expected output:
Desired results.txt
"SWITCH51";"rack7";"Datacenter2"
I am struggling to find a conditional to handle this scenario.
You could store the original lines in array a, as you already do, plus modified copies of the lines in which "rack1" is replaced by ":
$ awk -F';' -vOFS=';' 'FNR==NR {a[$0]; if($2=="\"rack1\"") {$2="\"";a[$0]}; next}
!($0 in a)' file1.txt file2.txt
"SWITCH51";"rack7";"Datacenter2"
Note the specification of the OFS output field separator. It is needed because when we modify the $2 field, awk reconstructs $0 using OFS, which by default is a space, while we need it to remain a semicolon for a correct comparison when parsing file2.txt.
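A minimal demonstration of this rebuild behavior (a hypothetical one-liner, not part of the original answer; the no-op $2=$2 assignment just forces awk to rebuild $0):
$ echo '"ROUTER9";"rack1";"rack1"' | awk -F';' '{$2=$2; print}'
"ROUTER9" "rack1" "rack1"
$ echo '"ROUTER9";"rack1";"rack1"' | awk -F';' -v OFS=';' '{$2=$2; print}'
"ROUTER9";"rack1";"rack1"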
You could check whether the value of field 2 is just " and replace it with "rack1".
If, after the replacement, $0 is not in array a, then print the unmodified row, which is kept in the tmp variable in the example.
awk '
BEGIN{FS=OFS=";"}                # fields are ;-separated; OFS keeps ; when $0 is rebuilt
FNR==NR {a[$0];next}             # first file: remember every line of file1.txt
{
    tmp = $0                     # keep the unmodified line for printing
    sub(/^"$/, "\"rack1\"", $2)  # if $2 is exactly ", treat it as "rack1"
    if (!($0 in a)) print tmp    # print lines that still have no match in file1.txt
}
' file1.txt file2.txt
Output
"SWITCH51";"rack7";"Datacenter2"
Based on your shown samples, you could also try the following awk code. A short explanation: while reading the first Input_file it creates two arrays, a and b, indexed by $0 and by $1,$3 respectively. While reading the second Input_file it checks two conditions: if $1,$3 is NOT present in b AND $0 is not present in a, then it prints that line of Input_file2.
awk -F';' '
FNR==NR{       # first file: file1.txt
  a[$0]        # remember whole lines
  b[$1,$3]     # remember $1/$3 pairs
  next
}
!(($1,$3) in b) && !($0 in a)   # print file2.txt lines matching neither array
' file1.txt file2.txt
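With the sample files above, this should again print only "SWITCH51";"rack7";"Datacenter2".

Yet another variant (the lead-in here is mine; the code is as posted) builds a normalized lookup key from each file2.txt line instead, mapping a bare " in $2 back to "rack1" before the comparison: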
awk -F';' '
NR==FNR { a[$0]; next }
{ key = $1 FS ($2 == "\"" ? "\"rack1\"" : $2) FS $3 }
!(key in a)
' file1.txt file2.txt
"SWITCH51";"rack7";"Datacenter2"
Related
Search array index with double quotes string using awk
File1:
"1"|data|er
"2"|text|rq
""|test2|req
"3"|test4|teq

File2:
1
2
3

Expected output should be (file3.txt):
"1"|data|er
"2"|text|rq
"3"|test4|teq

awk -F''$Delimeter'' '{print $1}' file1.txt | awk '{gsub(/"/, "", $1); print $1}' | awk 'NF && !seen[$1]++' | sort -n > file2.txt

I am able to extract the ids 1,2,3 from file1, remove the double quotes, and write them into file2, but I need to search for these ids (1,2,3) in my file1.txt ("1","2","3"). The problem is that the search does not recognize them because of the double quotes in the file:

awk 'BEGIN {FS=OFS="|"} NR==FNR{a[$1]; next} \"$1\" in a' file2.txt file1.txt > file3.txt
Could you please try the following.

awk -v s1='"' '
FNR==NR{
  val=s1 $0 s1
  a[val]
  next
}
($1 in a)
' Input_file2 FS='|' Input_file1

Explanation: adding a detailed explanation of the above code.

awk -v s1='"' '   ##Starting the awk program and creating variable s1 whose value is ".
FNR==NR{          ##Condition FNR==NR is TRUE while the first Input_file, named Input_file2, is being read.
  val=s1 $0 s1    ##Creating variable val whose value is s1, the current line, and s1 again.
  a[val]          ##Creating an array named a whose index is variable val.
  next            ##next skips all further statements from here.
}                 ##Closing the FNR==NR block here.
($1 in a)         ##If $1 of the current line is present in array a, print that line of Input_file1.
' Input_file2 FS='|' Input_file1   ##Mentioning Input_file2, then setting FS to pipe, then mentioning Input_file1.
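Note the trick in the argument list: any var=value argument is evaluated when awk reaches it, so FS='|' takes effect only for the file named after it. A minimal sketch of the effect, using hypothetical throwaway files:
$ printf 'x|y\n' > f1; printf 'x|y\n' > f2
$ awk '{print NF}' f1 FS='|' f2
1
2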
Let's say that your input is

"1"|data|er
"2"|text|rq
""|test2|req
"3"|test4|teq

and you want two kinds of data from it: the ids, and the lines containing an id. The easiest way to achieve this is, I think, to first get the lines that have an id, then retrieve the ids from those. To do so:

$ awk -F'|' '$0 ~ /"[0-9]+"/' input1 >input3; cat input3
"1"|data|er
"2"|text|rq
"3"|test4|teq
$ sed 's/^"//; s/".*$//' input3 >input2; cat input2
1
2
3
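Both steps could also be collapsed into one awk pass; this combined one-liner is my own sketch, not from the answer above (it keeps the file order and does not deduplicate, unlike the sort -n pipeline in the question):
$ awk -F'|' '$1 ~ /^"[0-9]+"$/ {print > "input3"; gsub(/"/, "", $1); print $1 > "input2"}' input1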
extract data using sed or awk in linux
I am trying to merge data from 2 text files based on some condition. I have two files:

1.txt
gera077||o||emi_riv_90@hotmail.com||||200.45.113.254||o||0f8caa3ced5dc172901a427410d20540
okan1993||||killa-o@hotmail.de||||84.141.125.140||o||69c1cb5ddbc66cceebe0dddba3eddf68
Tosiunia||||tosia_19@amorki.pl||o||83.22.193.86|||||ddcbba2076646980391cb4971b8030 DREP
glen-666||o||glen-666@hotmail.com||||84.196.42.167||o||f139d8b49085d012af9048bb1cba3534
Page 1 Sheyes1 ||||summer_faerie_dustyrose@yahoo.com||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| .
BenPhynix||||BenPhynix@aol.de||||| 62.226.181.57||||11dea24f1caebb012e11285579050f38
menopause |||totoche@wanadoo.fr||o||83.193.209.52||o||d7ca4d78fc79a795695ae1c161ce82ea
jonof.|o||joflem@medi3.no||o||213.161.242.106||o||239f33743e4a070b728d4dcbd1091f1a

2.txt
f139d8b49085d012af9048bb1cba3534: 12883 #: "#
d7ca4d78fc79a795695ae1c161ce82ea: 123422
0f8caa3ced5dc172901a427410d20540 :: demo

result.txt should contain the matching lines from 1.txt, with the hash replaced by the corresponding value from 2.txt:

gera077 || o || emi_riv_90@hotmail.com || or || 200.45.113.254 || o ||: demo
glen-666-||glen-666@hotmail.com||||84.196.42.167||||12883 #: "#
menopause |||totoche@wanadoo.fr||o||83.193.209.52||o||123422

left.txt should contain the non-matching lines from 1.txt:

okan1993||||killa-o@hotmail.de||||84.141.125.140||o||69c1cb5ddbc66cceebe0dddba3eddf68
Tosiunia||||tosia_19@amorki.pl||o||83.22.193.86|||||ddcbba2076646980391cb4971b8030 DREP
Page 1 Sheyes1 ||||summer_faerie_dustyrose@yahoo.com||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| .
BenPhynix||||BenPhynix@aol.de||||| 62.226.181.57||||11dea24f1caebb012e11285579050f38
jonof.|o||joflem@medi3.no||o||213.161.242.106||o||239f33743e4a070b728d4dcbd1091f1a

The script I am trying is:

awk -v s1="||o||" '
FNR==NR{
  a[$9]=$1 s1 $5;
  b[$9]=$13 s1 $17 s1 $21;
  c[$9]=$0;
  next
}
($1 in a){
  val=$1;
  $1="";
  sub(/:/,"");
  print a[val] s1 $0 s1 b[val];
  d[val]=$0;
  next
}
END{
  for(i in d){ delete c[i] };
  for(j in c){ print c[j] > "left.txt" }
}
' FS="|" 1.txt FS=":" OFS=":" 2.txt > result.txt

But it gives me an empty result.txt, and I am having difficulty debugging the issue. Any help would be highly appreciated.
Try the following awk (completely based on your shown Input_file(s), and assuming that your 2.txt has no duplicates either) and let me know if this helps you.

awk 'FNR==NR{a[$NF]=$0;next} $1~/:/{sub(/:/,"",$1);flag=1} ($1 in a){val=$1;if($0 ~ /:/ && !flag){sub(/[^:]*/,"");sub(/:/,"")};print a[val] OFS $0 > "result.txt";flag="";delete a[val]} END{for(i in a){print a[i]>"left.txt"}}' FS="|" 1.txt FS=" " OFS="||o||" 2.txt

The output will be two files named result.txt and left.txt. Adding a non-one-liner form of the solution now, with an explanation.

awk '
FNR==NR{                 ##FNR and NR are both built-in awk variables denoting line numbers in the Input_file(s); the difference is that FNR is RESET for each new Input_file while NR keeps increasing until all Input_file(s) have been read.
  a[$NF]=$0;             ##Creating an array named a whose index is $NF (the last field of the current line) and whose value is the current line.
  next                   ##next is a built-in awk keyword which skips all further statements.
}
$1~/:/{                  ##If the 1st field of the current line contains a colon, then do the following:
  sub(/:/,"",$1);        ##Substitute the colon in the 1st field with NULL.
  flag=1                 ##Set a variable named flag (basically to record that the 1st colon was substituted, so no further colon removal is needed).
}
($1 in a){               ##If the current line's $1 is present in array a, then do the following:
  val=$1;                ##Set the variable named val to $1.
  if($0 ~ /:/ && !flag){ ##If the current line contains a colon and flag is NOT set, then do the following:
    sub(/[^:]*/,"");     ##Substitute everything from the start of the line up to the colon with NULL.
    sub(/:/,"")};        ##Then substitute the single remaining colon.
  print a[val] OFS $0 > "result.txt";  ##Print the value of array a at index val, the OFS (output field separator), and the current line to the output file result.txt.
  flag="";               ##Unset the flag variable.
  delete a[val]          ##Delete the element of array a at index val.
}
END{                     ##The END section is executed once all Input_file(s) have been read.
  for(i in a){           ##Traverse array a (whose remaining entries are the lines NOT matched in both files).
    print a[i]>"left.txt"}  ##Print them to the file left.txt.
}
' FS="|" 1.txt FS=" " OFS="||o||" 2.txt   ##Setting FS="|" for 1.txt, then FS=" " and OFS="||o||" for 2.txt; 1.txt and 2.txt are the Input_files for this program.
This awk script may also help.

$ awk 'BEGIN{FS="\|";OFS="|"}NR==FNR{data[$1]=$2;} NR!=FNR{if($NF in data){ $NF=data[$NF];print >"result.txt" }else{ print >"left.txt"} }' <( sed 's/\s*:\s*/|/' 2.txt) 1.txt 2>/dev/null

Output

$ cat result.txt
gera077||o||emi_riv_90@hotmail.com||||200.45.113.254||o||: demo
glen-666||o||glen-666@hotmail.com||||84.196.42.167||o||12883 #: "#
menopause |||totoche@wanadoo.fr||o||83.193.209.52||o||123422

$ cat left.txt
okan1993||||killa-o@hotmail.de||||84.141.125.140||o||69c1cb5ddbc66cceebe0dddba3eddf68
Tosiunia||||tosia_19@amorki.pl||o||83.22.193.86|||||ddcbba2076646980391cb4971b8030 DREP
Page 1 Sheyes1 ||||summer_faerie_dustyrose@yahoo.com||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| .
BenPhynix||||BenPhynix@aol.de||||| 62.226.181.57||||11dea24f1caebb012e11285579050f38
jonof.|o||joflem@medi3.no||o||213.161.242.106||o||239f33743e4a070b728d4dcbd1091f1a

We have preprocessed the first file, using sed, to turn its field delimiter into |, and used process substitution to pass the result to awk.
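To see what awk actually receives from the process substitution, you can run the sed step on its own; the lines below are inferred from the sample 2.txt, not taken from the answer:
$ sed 's/\s*:\s*/|/' 2.txt
f139d8b49085d012af9048bb1cba3534|12883 #: "#
d7ca4d78fc79a795695ae1c161ce82ea|123422
0f8caa3ced5dc172901a427410d20540|: demo
Only the first colon (with surrounding whitespace) is replaced, which is why the last line keeps ": demo" as its second field, matching the "||: demo" in the desired result.txt.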
Matching files using awk in linux
I have 2 files:

1.txt:
e10adc3949ba59abbe56e057f20f883e
f8b46e989c5794eec4e268605b63eb59
e3ceb5881a0a1fdaad01296d7554868d

2.txt:
e10adc3949ba59abbe56e057f20f883e:1111
679ab793796da4cbd0dda3d0daf74ec1:1234
f8b46e989c5794eec4e268605b63eb59:1#/233:

I want 2 files as output: one is result.txt, which contains the lines from 2.txt whose match is in 1.txt, and the other is left.txt, which contains the lines from 1.txt whose match is not in 2.txt. The expected output of both files is below:

result.txt
e10adc3949ba59abbe56e057f20f883e:1111
f8b46e989c5794eec4e268605b63eb59:1#/233:

left.txt
e3ceb5881a0a1fdaad01296d7554868d

I tried a couple of approaches with awk but did not succeed. Any help would be highly appreciated. My script:

awk '
FNR==NR{
  val=$1;
  sub(/[^:]*/,"");
  sub(/:/,"");
  a[val]=$0;
  next
}
!($NF in a){
  print > "left.txt";
  next
}
{
  print $1,$2,a[$NF] > "result.txt"
}
' FS=":" 2.txt FS=":" OFS=":" 1.txt
The following awk may help you here.

awk 'FNR==NR{a[$1]=$0;next} ($0 in a){print a[$0] > "results.txt";next} {print > "left.txt"}' FS=":" OFS=":" 2.txt FS=" " OFS=":" 1.txt

EDIT: Adding an explanation of the code too here.

awk '
FNR==NR{                 ##FNR==NR is TRUE while the first Input_file is being read by awk; FNR and NR are built-in awk variables.
  a[$1]=$0;              ##Creating an array named a whose index is $1 and whose value is the whole line of the 2.txt Input_file.
  next                   ##next is a built-in awk keyword which skips all further statements.
}
($0 in a){               ##If the current line of Input_file 1.txt is present in array a, then do the following:
  print a[$0] > "results.txt";  ##Print the stored line to the output file named results.txt, since the current line appears in array a (which was built from the first file).
  next                   ##Skip further statements.
}
{
  print > "left.txt"     ##Print all lines that did not satisfy the condition above (i.e. never made it into array a) to the output file named left.txt, as the OP needs.
}
' FS=":" OFS=":" 2.txt FS=" " OFS=":" 1.txt   ##Setting FS (field separator) to colon for 2.txt and to space for 1.txt; yes, we can set different field separators for different Input_file(s).
How about this one:

awk 'BEGIN{ FS = ":" }NR==FNR{ a[$0]; next }$1 in a{ print $0 > "results.txt"; delete a[$1]; next }END{ for ( i in a ) print i > "left.txt" }' 1.txt 2.txt

Output:

results.txt
e10adc3949ba59abbe56e057f20f883e:1111
f8b46e989c5794eec4e268605b63eb59:1#/233:

left.txt
e3ceb5881a0a1fdaad01296d7554868d
Linux combine multiple files
I have multiple files that have to be combined. Here are two example files.

1.txt
Allele Sequence
B*07:02:01 ABCDE
B*07:33:01 ABCD
B*07:41 AB

2.txt
Allele Sequence
B*07:02:01 FGHIJ
B*07:33:01 EFGH
B*07:41 CD

The delimiter is Tab (\t). I want to get a result like

B*07:02:01 ABCDEFGHIJ
B*07:33:01 ABCDEFGH
B*07:41 ABCD

I tried the following:

awk -F"\t" '
{key = $1}
FNR==NR {line[key]=$0; next}
key in line {print line[$1], $2}
' $1 $2 > output_2.txt

Then the result looks like

Allele Sequence^M Sequence^M
B*07:02:01 ABCDE^M FGHIJ
B*07:33:01 ABCD^M EFGH
B*07:41 AB^M CD

How can I get exactly what I want? Thanks!
This may work:

awk 'FNR==NR {a[$1]=$2;next} FNR>1{print $0 a[$1]} ' 2.txt 1.txt
B*07:02:01 ABCDEFGHIJ
B*07:33:01 ABCDEFGH
B*07:41 ABCD

How it works:

awk '
FNR==NR {          # For the first file only (2.txt)
  a[$1]=$2         # Read data into array a, using $1 as key and $2 as value
  next}            # Skip to next record
FNR>1{             # Skip the first record (header) of the second file (1.txt)
  print $0 a[$1]}  # Print the complete record from 1.txt, then the array value keyed by $1
' 2.txt 1.txt      # read the files
awk -F"\t" ' {gsub("\r",""); key = $1} FNR==NR {line[key]=$0; next} key in line {print line[$1]$2} ' 1d.txt 2d.txt > x gsub will remove the "^M" you have problems with (it is chr(13)=CR, as your input files are from DOS/windows worlds which use CRLF, and you're processing then on UN*X, which uses only LF as a line separator) removing "," in line[$1]$2 will remove space
join 1.txt 2.txt | awk '{print $1, $2 $3}'
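A hedged caveat on this join one-liner, not from the original answer: join expects both files sorted on the join field, and the header line is joined like any other line, so with the sample files the first output line would be the glued headers:
$ join 1.txt 2.txt | awk '{print $1, $2 $3}'
Allele SequenceSequence
B*07:02:01 ABCDEFGHIJ
B*07:33:01 ABCDEFGH
B*07:41 ABCD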
how can i compare two text files which has multiple fields in unix
I have two text files.

file 1
number,name,account id,vv,sfee,dac acc,TDID
7000,john,2,0,0,1,6
7001,elen,2,0,0,1,7
7002,sami,2,0,0,1,6
7003,mike,1,0,0,2,1
8001,nike,1,2,4,1,8
8002,paul,2,0,0,2,7

file 2
number,account id,dac acc,TDID
7000,2,1,6
7001,2,1,7
7002,2,1,6
7003,1,2,1

I want to compare those two text files: if the four columns of file 2 are present and equal in file 1, I want output like this:

7000,john,2,0,0,1,6
7001,elen,2,0,0,1,7
7002,sami,2,0,0,1,6
7003,mike,1,0,0,2,1

nawk -F"," 'NR==FNR {a[$1];next} ($1 in a)' file2.txt file1.txt

This works well for comparing a single column across the two files; I want to compare multiple columns. Does anyone have a suggestion?
This awk one-liner works for multiple columns on unsorted files:

awk -F, 'NR==FNR{a[$1,$2,$3,$4]++;next} (a[$1,$3,$6,$7])' file1.txt file2.txt

For this to work, it is imperative that the first file used as input (file1.txt in my example) be the file that has only 4 fields, like so:

file1.txt
7000,2,1,6
7001,2,1,7
7002,2,1,6
7003,1,2,1

file2.txt
7000,john,2,0,0,1,6
7000,john,2,0,0,1,7
7000,john,2,0,0,1,8
7000,john,2,0,0,1,9
7001,elen,2,0,0,1,7
7002,sami,2,0,0,1,6
7003,mike,1,0,0,2,1
7003,mike,1,0,0,2,2
7003,mike,1,0,0,2,3
7003,mike,1,0,0,2,4
8001,nike,1,2,4,1,8
8002,paul,2,0,0,2,7

Output

$ awk -F, 'NR==FNR{a[$1,$2,$3,$4]++;next} (a[$1,$3,$6,$7])' file1.txt file2.txt
7000,john,2,0,0,1,6
7001,elen,2,0,0,1,7
7002,sami,2,0,0,1,6
7003,mike,1,0,0,2,1

Alternatively, you could use the following syntax, which more closely matches the one in your question but is not very readable IMHO:

awk -F, 'NR==FNR{a[$1,$2,$3,$4];next} ($1SUBSEP$3SUBSEP$6SUBSEP$7 in a)' file1.txt file2.txt
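For context (my note, not part of the answer): a comma inside an awk array subscript concatenates the values with the built-in SUBSEP variable (by default the unlikely character "\034"), so a[$1,$2] and a[$1 SUBSEP $2] name the same element. A quick sanity check:
$ awk 'BEGIN{ a["7000","2"]; print (("7000" SUBSEP "2") in a) }'
1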
TxtSushi looks like what you want. It allows you to work with CSV files using SQL.
It's not an elegant one-liner, but you could do it with perl.

#!/usr/bin/perl
# First argument: the 7-column file; second argument: the 4-column file.
open A, $ARGV[0];
while (split /,/, <A>) {
    $k{$_[0]} = [@_];    # index each row of file A by its first field
}
close A;
open B, $ARGV[1];
while (split /,/, <B>) {
    print join(',', @{$k{$_[0]}})
        if defined($k{$_[0]})
        && $k{$_[0]}->[2] == $_[1]
        && $k{$_[0]}->[5] == $_[2]
        && $k{$_[0]}->[6] == $_[3];
}
close B;
Quick answer: Use cut to split out the fields you need and diff to compare the results.
Not really well tested, but this might work:

join -t, file1 file2 | awk -F, 'BEGIN{OFS=","} {if ($3==$8 && $6==$9 && $7==$10) print $1,$2,$3,$4,$6,$7}'

(Of course, this assumes the input files are sorted.)
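If they are not already sorted, pre-sorting on the join key (my addition; adjust the names to your actual files) is usually enough:
sort -t, -k1,1 file1 -o file1
sort -t, -k1,1 file2 -o file2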
This is neither efficient nor pretty; it will, however, get the job done. It is not the most efficient implementation, as it parses file1 multiple times, but it does not read the entire file into RAM either, so it has some benefits over the simple scripting approaches.

sed -n '2,$p' file1 | awk -F, '{print $1 "," $3 "," $6 "," $7 " " $0 }' | \
sort | join file2 - | awk '{print $2}'

This works as follows:
sed -n '2,$p' file1 sends file1 to STDOUT without the header line.
The first awk command prints the 4 "key fields" from file1 in the same format as they appear in file2, followed by a space, followed by the full file1 record.
The sort command ensures that file1 is in the same order as file2.
The join command joins file2 with its standard input (the -), only writing records that have a matching record in file2.
The final awk command prints just the original part of file1.

In order for this to work you must ensure that file2 is sorted before running the command. Running this against your example data gave the following result:

7000,john,2,0,0,1,6
7001,elen,2,0,0,1,7
7002,sami,2,0,0,1,6
7003,mike,1,0,0,2,1

EDIT: I note from your comments that you are getting a sorting error. If this error occurs when sorting file2 before running the pipeline, you could split the file, sort each part, and then cat them back together again. Something like this would do that for you:

mv file2 file2.orig
for i in 0 1 2 3 4 5 6 7 8 9
do
  grep "^${i}" file2.orig | sort > file2.$i
done
cat file2.[0-9] > file2
rm file2.[0-9] file2.orig

You may need to modify the values passed to for if your file is not distributed evenly across the full range of leading digits.
The statistical package R handles processing multiple CSV tables really easily. See An Introduction to R or R for Beginners.