file_1 contents:
aaa 111 222 333
bbb 444 555 666
ccc 777 888 999
file_2 contents:
ddd
eee
fff
How do I copy only part of the text from file_1 to file_2, so that file_2 becomes:
ddd 111 222 333
eee 444 555 666
fff 777 888 999
Try with awk:
awk 'NR==FNR{a[FNR]=$2FS$3FS$4;next} {print $0, a[FNR]}' file_1 file_2
Explanation:
NR is the overall input line number across all files, while FNR is the line number within the current file; you can see this with
$ awk '{print NR,FNR}' file_1 file_2
1 1
2 2
3 3
4 1
5 2
6 3
So, the condition NR==FNR is only true when reading the first file, and that's when the columns $2, $3, and $4 get saved in a[FNR]. After reading file_1, the condition NR==FNR becomes false and the block {print $0, a[FNR]} is executed, where $0 is the whole line in file_2.
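For this fixed layout the same join can also be done without awk; a sketch using cut and paste, assuming single-space separators and files of equal length:

```shell
# Recreate the sample files from the question
printf '%s\n' 'aaa 111 222 333' 'bbb 444 555 666' 'ccc 777 888 999' > file_1
printf '%s\n' 'ddd' 'eee' 'fff' > file_2

# cut keeps everything from column 2 of file_1 onward;
# paste glues those columns after each line of file_2
cut -d' ' -f2- file_1 | paste -d' ' file_2 -
```

The awk version remains more flexible when the separators vary or the files differ in length.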
Related
I want to look up every line of search_file in keyword_file and print an output_file, replacing each line's incorrect value with the correct one extracted from keyword_file. The script should also warn the user about entries with a missing 3rd column that have no match in keyword_file but do appear in search_file (for example, "ggg coms" in the file).
Note that keyword_file may contain a different number of lines than search_file. For example:
search_file
aaa coms 123
bbb coms 234
ccc
ddd coms 456
eez coms 789
fkk coms 987
ggg coms
hhh coms 989
....
keyword_file
aaa coms 789
bbb coms 234
ccc coms 878
ddd coms 456
ttt coms 654
eee coms 789
Output
aaa coms 789
bbb coms 234
ccc coms 878
ddd coms 456
eez coms 789
fkk coms 987
hhh coms 989
....
I tried the following awk command, but it did not retain the column #1 entries of search_file in the output.
awk 'FNR==NR{a[$1]=$0} FNR!=NR&&a[$1]{print $1,$2,$3}' search_file keyword_file
Thank you in advance for your help :)
Could you please try the following, written and tested with the shown samples only.
awk '
{
key=$1
}
FNR==NR{
a[key]=$3
next
}
(key in a){
$0=key OFS $2 OFS a[key]
}
1
' keyword_file search_file
Explanation: a detailed, line-by-line explanation of the above.
awk ' ##Starting awk program from here.
{
key=$1 ##Run this command on each line of Input_file and create variable key with value of 1st field.
}
FNR==NR{ ##Checking condition if FNR==NR which will be TRUE when keyword_file is being read.
a[key]=$3 ##Creating array a with index key and value of 3rd field here.
next ##next will skip all further statements from here.
}
(key in a){ ##Checking condition if key is present in array a then do following.
$0=key OFS $2 OFS a[key] ##Setting value of key OFS 2nd field OFS array a value with index key here.
}
1 ##1 will print edited/non-edited values for all lines.
' keyword_file search_file ##Mentioning Input_file names here.
Why OP's code didn't work: you were close, but you only printed lines whose key is common to both input files. What I did instead: when the key is present in both input files, re-create the line with the new last value, and then the trailing 1 prints every line, edited or not.
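The question also asks for a warning about entries with a missing 3rd column that have no counterpart in keyword_file ("ggg coms" in the sample). A sketch extending the same idea; it assumes the 2nd field is always coms, which holds for the shown samples but is not stated in general:

```shell
# Abridged sample files from the question
cat > keyword_file <<'EOF'
aaa coms 789
bbb coms 234
ccc coms 878
ddd coms 456
ttt coms 654
eee coms 789
EOF
cat > search_file <<'EOF'
aaa coms 123
ccc
ggg coms
hhh coms 989
EOF

awk '
FNR==NR { a[$1]=$3; next }            # remember keyword_file 3rd field per key
NF<3 && !($1 in a) {                  # short line with no replacement available
  print "warning: no entry for " $1 > "/dev/stderr"
  next                                # drop it from the output
}
($1 in a) { $0=$1 OFS "coms" OFS a[$1] }  # rebuild line; assumes 2nd field is "coms"
1' keyword_file search_file
```

Lines present only in search_file with all three columns ("hhh coms 989") pass through untouched, as in the expected output.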
I am trying to compare two files and append another column if a certain condition is satisfied.
file1.txt
1 101 111 . BCX 123
1 298 306 . CCC 234
1 299 305 . DDD 345
file2.txt
1 101 111 BCX P1#QQQ
1 299 305 DDD P2#WWW
The output should be:
1 101 111 . BCX 123;P1#QQQ
1 298 306 . CCC 234
1 299 305 . DDD 345;P2#WWW
What I can do is handle only the lines that have a match:
awk 'NR==FNR{ a[$1,$2,$3,$4]=$5; next }{ s=SUBSEP; k=$1 s $2 s $3 s $5 }k in a{ print $0,a[k] }' file2.txt file1.txt
1 101 111 . BCX 123 P1#QQQ
1 299 305 . DDD 345 P2#WWW
But then, I am missing the second line in file1.
How can I still keep it even though there is no match with file2 regions?
If you want to print every line, you need your print command not to be limited by your condition.
awk '
NR==FNR {
a[$1,$2,$3,$4]=$5; next
}
{
s=SUBSEP; k=$1 s $2 s $3 s $5
}
k in a {
$6=$6 ";" a[k]
}
1' file2.txt file1.txt
The 1 is shorthand that says "print every line". It's a condition (without command statements) that always evaluates "true".
The k in a block simply replaces the existing 6th field with the concatenated one. If the condition is not met, the replacement doesn't happen, but the line is still printed because of the 1.
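Recreating the sample files and running the script confirms the non-matching CCC line survives:

```shell
cat > file1.txt <<'EOF'
1 101 111 . BCX 123
1 298 306 . CCC 234
1 299 305 . DDD 345
EOF
cat > file2.txt <<'EOF'
1 101 111 BCX P1#QQQ
1 299 305 DDD P2#WWW
EOF

awk '
NR==FNR { a[$1,$2,$3,$4]=$5; next }   # remember file2 annotations by 4-field key
{ s=SUBSEP; k=$1 s $2 s $3 s $5 }     # build the matching key from file1 fields
k in a { $6=$6 ";" a[k] }             # append the annotation when the key matches
1' file2.txt file1.txt
```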
The following awk may also help here.
awk 'FNR==NR{a[$1,$2,$3,$4]=$NF;next} (($1,$2,$3,$5) in a){print $0";"a[$1,$2,$3,$5];next} 1' file2.txt file1.txt
Output will be as follows.
1 101 111 . BCX 123;P1#QQQ
1 298 306 . CCC 234
1 299 305 . DDD 345;P2#WWW
Another awk:
$ awk ' {t=5-(NR==FNR); k=$1 FS $2 FS $3 FS $t}
NR==FNR {a[k]=$NF; next}
k in a {$0=$0 ";" a[k]}1' file2 file1
1 101 111 . BCX 123;P1#QQQ
1 298 306 . CCC 234
1 299 305 . DDD 345;P2#WWW
The last component of the key is either the 4th or the 5th field, depending on whether the first or the second file is being read; set it accordingly and use a single k variable in the script. Note that
t=5-(NR==FNR)
can be written more conventionally as
t=NR==FNR?4:5
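Since an awk comparison evaluates to 1 or 0, the arithmetic form is easy to verify; a quick sketch with two throwaway one-line files:

```shell
printf 'x\n' > f1
printf 'y\n' > f2
# NR==FNR is 1 while reading the first file and 0 afterwards,
# so 5-(NR==FNR) gives 4 for f1 and 5 for f2
awk '{print FILENAME, 5-(NR==FNR)}' f1 f2
```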
I have two files, file1 and file2. I want to compare several columns - $1, $2, $3 and $4 of file1 with $1, $2, $3 and $4 of file2 - and print those rows of file2 that do not match any row in file1.
E.g.
file1
aaa bbb ccc 1 2 3
aaa ccc eee 4 5 6
fff sss sss 7 8 9
file2
aaa bbb ccc 1 f a
mmm nnn ooo 1 d e
aaa ccc eee 4 a b
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
fff sss sss 7 5 6
I want to have as output:
mmm nnn ooo 1 d e
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
I have seen questions asked here about finding and printing the rows that do match, but not vice versa, those that DO NOT match.
Thank you!
Use the following script:
awk '{k=$1 FS $2 FS $3 FS $4} NR==FNR{a[k]; next} !(k in a)' file1 file2
k is the concatenation of columns 1, 2, 3 and 4, delimited by FS, and is later used as the key into the search array a. NR==FNR is true while reading file1; that's when the array a, indexed by k, gets created.
For the remaining lines of input, !(k in a) checks whether the index does not exist in a. If that evaluates to true, awk prints the line.
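Rebuilding the sample files and running the one-liner yields exactly the three rows of file2 whose first four columns never appear in file1:

```shell
cat > file1 <<'EOF'
aaa bbb ccc 1 2 3
aaa ccc eee 4 5 6
fff sss sss 7 8 9
EOF
cat > file2 <<'EOF'
aaa bbb ccc 1 f a
mmm nnn ooo 1 d e
aaa ccc eee 4 a b
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
fff sss sss 7 5 6
EOF

# Collect file1 keys, then print file2 lines whose key is absent
awk '{k=$1 FS $2 FS $3 FS $4} NR==FNR{a[k]; next} !(k in a)' file1 file2
```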
Here is another approach, if the files are sorted and you know the character set in use.
$ function f(){ sed 's/ /~/g;s/~/ /4g' $1; }; join -v2 <(f file1) <(f file2) |
sed 's/~/ /g'
mmm nnn ooo 1 d e
aaa ccc eee 4 a b
ppp qqq rrr 4 e a
sss ttt uuu 7 m n
fff sss sss 7 5 6
Create a key field by concatenating the first four fields (with a ~ here, but any unused character works), use join to find the unmatched entries from file2, and then split the synthetic key field back apart. Note that join needs both inputs sorted on the key; file2 above is not, which is why some matched rows still leak into the output.
However, the best way is to use the awk solution, with a slight fix:
$ awk 'NR==FNR{a[$1,$2,$3,$4]; next} !(($1,$2,$3,$4) in a)' file1 file2
No doubt the awk solution from @hek2mgl is better than this one, but for information it is also possible using uniq, sort, and rev:
rev file1 file2 | sort -k3 | uniq -u -f2 | rev
rev reverses each line of both files, right to left.
sort -k3 sorts the lines, skipping the first 2 columns.
uniq -u -f2 prints only the lines that are unique (skipping the first 2 fields while comparing).
The final rev turns the lines back around.
Note that this solution sorts the lines of both files, which may or may not be desired.
I have this input
file 1 file 2
A 10 222 77.11 11
B 20 2222 1.215 22
C 30 22222 12.021 33
D 40 222222 145.00 44
The output I need is (11+22+33+44)- (10+20+30+40) = 110-100=10
thank you in advance for your help
Here you go:
paste file1.txt file2.txt | awk '{s1+=$5; s2+=$2} END {print s1-s2}'
Or better yet (clever of @falsetru's answer to do the summing with a single variable):
paste file1.txt file2.txt | awk '{sum+=$5-$2} END {print sum}'
If you want to work with column N of file1 and column M of file2, this might be "easier", but less efficient (replace N and M with literal column numbers; awk does not expand placeholders, and with N unset $N would refer to the whole line):
paste <(awk '{print $N}' file1.txt) <(awk '{print $M}' file2.txt) | awk '{sum+=$2-$1} END {print sum}'
It's easier in the sense that you don't have to count the column's position within the combined paste output, but less efficient because of the extra awk sub-processes.
Using awk:
$ cat test.txt
file 1 file 2
A 10 222 77.11 11
B 20 2222 1.215 22
C 30 22222 12.021 33
D 40 222222 145.00 44
$ tail -n +2 test.txt | awk '{s += $5 - $2} END {print s}'
10
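The tail isn't strictly needed: awk can skip the header itself with an NR>1 condition on the same combined file:

```shell
# Combined file as in the answer above, header line included
cat > test.txt <<'EOF'
file 1 file 2
A 10 222 77.11 11
B 20 2222 1.215 22
C 30 22222 12.021 33
D 40 222222 145.00 44
EOF

# Skip the header line, then accumulate the per-row column difference
awk 'NR>1 {sum+=$5-$2} END {print sum}' test.txt
```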
I need to prepend a line counter to lines matching specific patterns in a file, while still outputting the lines that do not match this pattern.
For example, if my file looks like this:
aaa 123
bbb 456
aaa 666
ccc 777
bbb 999
and the patterns I want to count are 'aaa' and 'ccc', I'd like to get the following output:
1:aaa 123
bbb 456
2:aaa 666
3:ccc 777
bbb 999
Preferably I'm looking for a Linux one-liner. Shell or tool doesn't matter as long it's installed by default in most distros.
With awk:
awk '{if ($1=="aaa" || $1=="ccc") {a++; $0=a":"$0}} {print}' file
1:aaa 123
bbb 456
2:aaa 666
3:ccc 777
bbb 999
Explanation
Loop through the lines, checking whether the first field is aaa or ccc. If so, increment the counter a and prepend it to the line ($0). Finally, print the line in all cases: if the pattern matched, the line begins with the counter; otherwise it is just the original line.
The following approach is in Perl:
open(my $fh, '<', 'abc.txt') or die "Cannot open abc.txt: $!";
my $incremental_val = 1;
while (my $line = <$fh>) {
    chomp($line);
    # Number only the lines starting with one of the patterns
    if ($line =~ m/^aaa / || $line =~ m/^ccc /) {
        print "$incremental_val : $line\n";
        $incremental_val++;
        next;
    }
    print "$line\n";
}
close($fh);
The output will be as follows.
1 : aaa 123
bbb 456
2 : aaa 666
3 : ccc 777
bbb 999