read different fields and pass on to awk to extract those fields - linux

This is probably answered somewhere, but the things I have explored do not match my need.
I would like to read different fields from one file (FILE1) and pass them on to an awk script, which can extract those fields from another file (FILE2).
FILE1
1 156202173 156702173
2 26915624 27415624
4 111714419 112214419
So I read lines from this file and pass them on to the following script:
awk ' BEGIN {FS=OFS="\t"};
{if ($1==$1 && $2>= $2 && $2<= $3 ) {print $0}}' FILE2 > extracted.file
FILE2 looks like this:
1 156202182 rs7929618
16 8600861 rs7190157
4 111714800 rs12364336
12 3840048 rs4766166
7 20776538 rs35621824
so the awk script should print only when the first field matches and the value falls between the 2nd and 3rd fields.
Expected output is
1 156202182 rs7929618
4 111714800 rs12364336
Thanks so much in advance for your response.

There are plenty of similar questions, but writing the script is faster than looking them up.
$ awk 'NR==FNR{lower[$1]=$2; upper[$1]=$3; next}
lower[$1]<$2 && $2<upper[$1]' file1 file2
1 156202182 rs7929618
4 111714800 rs12364336
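The answer above uses strict inequalities; the question says the value "falls between" the 2nd and 3rd fields, which (as in the asker's own `>=`/`<=` attempt) reads as inclusive. A sketch of an inclusive-bounds variant, recreating the sample files from the question:

```shell
# Recreate the question's sample files.
cat > file1 <<'EOF'
1 156202173 156702173
2 26915624 27415624
4 111714419 112214419
EOF
cat > file2 <<'EOF'
1 156202182 rs7929618
16 8600861 rs7190157
4 111714800 rs12364336
12 3840048 rs4766166
7 20776538 rs35621824
EOF
# First pass stores each range keyed by chromosome; second pass keeps a
# file2 row when its chromosome has a range and the position is inside
# it, bounds included. The ($1 in lower) guard skips unknown chromosomes.
awk 'NR==FNR{lower[$1]=$2; upper[$1]=$3; next}
     ($1 in lower) && lower[$1]<=$2 && $2<=upper[$1]' file1 file2
```

With these samples the strict and inclusive versions print the same two rows; they would only differ for a position sitting exactly on a boundary.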

Related

How to combine all files in a directory, adding their individual file names as a new column in final merged file

I have a directory with files that looks like this:
CCG02-215-WGS.format.flt.txt
CCG05-707-WGS.format.flt.txt
CCG06-203-WGS.format.flt.txt
CCG04-967-WGS.format.flt.txt
CCG05-710-WGS.format.flt.txt
CCG06-215-WGS.format.flt.txt
Contents of each files look like this
1 9061390 14 93246140
1 58631131 2 31823410
1 108952511 3 110694548
1 168056494 19 23850376
etc...
Ideal output would be a file, let's call it all-samples.format.flt.txt, containing the concatenation of all files plus an additional column showing which sample/file each row came from (with some minor formatting to remove the .format.flt.txt suffix):
1 9061390 14 93246140 CCG02-215-WGS
...
1 58631131 2 31823410 CCG05-707-WGS
...
1 108952511 3 110694548 CCG06-203-WGS
...
1 168056494 19 23850376 CCG04-967-WGS
Currently, I have the following code which works for individual files.
awk 'BEGIN{OFS="\t"; split(ARGV[1],f,".")}{print $1,$2,$3,$4,f[1]}' CCG05-707-WGS.format.flt.txt
#OUTPUT
1 58631131 2 31823410 CCG05-707-WGS
...
However, when I try to apply it to all files using the glob star, it adds the first filename it finds to every row as the last column.
awk 'BEGIN{OFS="\t"; split(ARGV[1],f,".")}{print $1,$2,$3,$4,f[1]}' *
#OUTPUT; the last column should vary per file as in the previous code block
1 9061390 14 93246140 CCG02-215-WGS
...
1 58631131 2 31823410 CCG02-215-WGS
...
1 108952511 3 110694548 CCG02-215-WGS
...
1 168056494 19 23850376 CCG02-215-WGS
I feel like the solution may just lie in adding an additional parameter to awk... but I'm not sure where to start.
Thanks!
UPDATE
Using awk's built-in FILENAME variable solved the issue, plus some elegant formatting logic for the file names.
Thanks #RavinderSingh13!
awk 'BEGIN{OFS="\t"} FNR==1{file=FILENAME;sub(/\..*/,"",file)} {print $0,file}' *.txt
With your shown samples, please try the following awk code. We need awk's built-in FILENAME variable here. On the first line of each txt file passed to the program, it copies FILENAME and removes everything from the first . to the end of the value; the main block then prints each line followed by file (the file's name, as required).
awk '
BEGIN { OFS="\t" }
FNR==1{
file=FILENAME
sub(/\..*/,"",file)
}
{
print $0,file
}
' *.txt
OR in a one-liner form try following awk code:
awk 'BEGIN{OFS="\t"} FNR==1{file=FILENAME;sub(/\..*/,"",file)} {print $0,file}' *.txt
You may use:
Any version awk:
awk -v OFS='\t' 'FNR==1{split(FILENAME, a, /\./)} {print $0, a[1]}' *.txt
Or in gnu-awk:
awk -v OFS='\t' 'BEGINFILE{split(FILENAME, a, /\./)} {print $0, a[1]}' *.txt
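One caveat with all of the FILENAME-based variants: if the files are passed with a directory component, FILENAME contains the whole path, so stripping from the first dot leaves the directory prefix attached (or eats it, if the path itself contains a dot). A sketch that strips any leading directories first; the data/ directory and its single file here are made up for the demonstration:

```shell
# Hypothetical layout: the .txt files live in a subdirectory.
mkdir -p data
printf '1\t9061390\t14\t93246140\n' > data/CCG02-215-WGS.format.flt.txt
awk 'BEGIN{OFS="\t"}
     FNR==1{
       file=FILENAME
       sub(/.*\//,"",file)   # drop any leading directories
       sub(/\..*/,"",file)   # drop everything from the first dot
     }
     {print $0,file}' data/*.txt
```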

Splitting file based on first column's first character and length

I want to split a .txt into two, with one file having all lines where the first column's first character is "A" and the total of characters in the first column is 6, while the other file has all the rest. Searching led me to find the awk command and ways to separate files based on the first character, but I couldn't find any way to separate it based on column length.
I'm not familiar with awk, so what I tried (to no avail) was awk -F '|' '$1 == "A*****" {print > ("BeginsWithA.txt"); next} {print > ("Rest.txt")}' FileToSplit.txt.
Any help or pointers to the right direction would be very appreciated.
EDIT: As RavinderSingh13 reminded, it would be best for me to put some samples/examples of input and expected output.
So, here's an input example:
#FileToSplit.txt#
2134|Line 1|Stuff 1
31516784|Line 2|Stuff 2
A35646|Line 3|Stuff 3
641|Line 4|Stuff 4
A48029|Line 5|Stuff 5
A32100|Line 6|Stuff 6
413|Line 7|Stuff 7
What the expected output is:
#BeginsWith6.txt#
A35646|Line 3|Stuff 3
A48029|Line 5|Stuff 5
A32100|Line 6|Stuff 6
#Rest.txt#
2134|Line 1|Stuff 1
31516784|Line 2|Stuff 2
641|Line 4|Stuff 4
413|Line 7|Stuff 7
What you want to do is use a regex and the length function. Your samples are | delimited, so set the field separator accordingly:
awk -F'|' '/^A/ && length($1) == 6 { print > "file_a.txt"; next } { print > "file_b.txt" }' file
This takes the information in file and, if the first field begins with "A" and is 6 characters long, writes the record to file_a.txt; otherwise the record is written to file_b.txt (adjust names as needed).
A non-regex awk solution:
awk -F'|' '{print $0>(index($1,"A")==1 && length($1)==6 ? "file_a.txt" : "file_b.txt")}' file
With your shown samples, could you please try the following. Since not all of your shown samples start with A, that logic is not added in this first solution; it makes sure the 1st field is all digits and 6 characters long, as per the shown samples.
awk -F'|' '$1~/^[0-9]+$/ && length($1)==6{print > ("BeginsWith6.txt");next} {print > ("rest.txt")}' Input_file
2nd solution: in case your 1st field starts with A followed by 5 digits (as in A35646 in your shown samples), try the following.
awk -F'|' '$1~/^A[0-9]+$/ && length($1)==6{print > ("BeginsWith6.txt");next} {print > ("rest.txt")}' Input_file
OR (a better version of the above):
awk -F'|' '$1~/^A[0-9]{5}$/{print > ("BeginsWith6.txt");next} {print > ("rest.txt")}' Input_file
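All of these variants can be sanity-checked against the sample from the question. A sketch using the literal requirement as stated (first character "A" and six characters total, whatever the rest is), with the question's own output file names:

```shell
# Recreate the question's sample input.
cat > FileToSplit.txt <<'EOF'
2134|Line 1|Stuff 1
31516784|Line 2|Stuff 2
A35646|Line 3|Stuff 3
641|Line 4|Stuff 4
A48029|Line 5|Stuff 5
A32100|Line 6|Stuff 6
413|Line 7|Stuff 7
EOF
# First field starts with A and is exactly 6 characters -> BeginsWith6.txt,
# everything else -> Rest.txt.
awk -F'|' '$1 ~ /^A/ && length($1)==6 { print > "BeginsWith6.txt"; next }
           { print > "Rest.txt" }' FileToSplit.txt
```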

replace pattern in file 2 with pattern in file 1 if contingency is met

I have two tab-delimited data files. file1 looks like:
cluster_j_72 cluster-32 cluster-32 cluster_j_72
cluster_j_75 cluster-33 cluster-33 cluster_j_73
cluster_j_8 cluster-68 cluster-68 cluster_j_8
file2 looks like:
NODE_148 67545 97045 cluster-32
NODE_221 1 42205 cluster-33
NODE_168 1 24506 cluster-68
I would like to confirm that, for a given row in file1, columns 2 and 3 are identical, and likewise columns 1 and 4. If this is the case, I would like to take that row's value from column 2 (file1), find it in file2, and replace it with the value from column 1 (file1). Thus the new output of file2 would look like this (note: because columns 1 and 4 don't match for cluster-33 in file1, the pattern is not replaced in file2):
NODE_148 67545 97045 cluster_j_72
NODE_221 1 42205 cluster-33
NODE_168 1 24506 cluster_j_8
I have been able to get the contingency correct (here printing the value from file1 I'd like to use to replace a value in file2):
awk '{if($2==$3 && $1==$4){print $1}}' file1
If I could get sed to draw values ($2 and $1) from file1 while looking in file 2 this would work:
sed 's/$2(from file1)/$1(from file1)/' file2
But I don't seem to be able to nest this sed inside the previous awk statement, nor get sed to look for a pattern that originates in a different file from the one it's searching.
thanks!
You never need sed when you're using awk since awk can do anything that sed can do.
This might be what you're trying to do:
$ cat tst.awk
BEGIN { FS=OFS="\t" }
NR==FNR {
if ( ($1 == $4) && ($2 == $3) ) {
map[$2] = $1
}
next
}
$4 in map { $4 = map[$4] }
{ print }
$ awk -f tst.awk file1 file2
NODE_148 67545 97045 cluster_j_72
NODE_221 1 42205 cluster-33
NODE_168 1 24506 cluster_j_8
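The same logic works as a one-liner; a sketch with the question's sample files created inline (tab-separated, as the script above assumes):

```shell
# Recreate the question's tab-delimited samples.
printf 'cluster_j_72\tcluster-32\tcluster-32\tcluster_j_72\ncluster_j_75\tcluster-33\tcluster-33\tcluster_j_73\ncluster_j_8\tcluster-68\tcluster-68\tcluster_j_8\n' > file1
printf 'NODE_148\t67545\t97045\tcluster-32\nNODE_221\t1\t42205\tcluster-33\nNODE_168\t1\t24506\tcluster-68\n' > file2
# Build the map only from file1 rows where $1==$4 and $2==$3, then
# rewrite file2's 4th field when it has a mapping; the trailing 1
# prints every record.
awk 'BEGIN{FS=OFS="\t"}
     NR==FNR { if ($1==$4 && $2==$3) map[$2]=$1; next }
     $4 in map { $4=map[$4] } 1' file1 file2
```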

how to Merge 2 tables with awk

First of all, sorry for my English; I know there are a lot of topics about AWK, but it's a very difficult tool for me...
I would like to merge two tables using common columns with awk. The tables differ in the number of rows. I have a first table that I want to modify and a second that serves as a reference table. I would like to compare column1.F1 with column1.F2. When it matches, add column2.F2 to my file1. But I need to keep all the lines in file1.
I give you an example:
File1
Num_id,Name,description1,description2,description3
?,atlanta_1,,,
RO_5,babeni_SW,,,
? ,Bib1,,,
RO_9,BoUba_456,,,
?,Castor,,,
File2
official_Num_id,official_Name
RO_1,America
RO_2,Andre
RO_3,Atlanta
RO_4,Axa
RO_5,Babeni
RO_6,Barba
RO_7,Bib
RO_8,Bilbao
RO_9,Bouba
RO_10,Castor
File3
Num_id,Name,description1,description2,description3,official_Name
?,atlanta_1,,,
RO_5,babeni_SW,,,Babeni
?,Bib1,,,
RO_9,BoUba_456,,,Bouba
?,Castor,,,
I read a lot of solutions on the Internet and it seems that awk could work.
I tried awk 'NR==FNR {h[$1] = $2; next} {print $0,h[$1]}' $File1 $File2 > file3
But my command doesn't work; my File3 looks exactly like File1.
Additionally, I don't know if it's possible to compare the two second columns when names differ, like atlanta_1 and Atlanta, and add the official_Num_id and the official_Name to my File1.
Any hero over there?
You had it, except for two small things. First, you need to set your field separators to , and, second, reverse the order of your input files on the command line so that the reference file is processed first:
$ awk 'BEGIN {FS=OFS=","} NR==FNR {h[$1] = $2; next} {print $0,h[$1]}' File2 File1
Num_id,Name,description1,description2,description3,
?,atlanta_1,,,,
RO_5,babeni_SW,,,,Babeni
? ,Bib1,,,,
RO_9,BoUba_456,,,,Bouba
?,Castor,,,,
You can also use the join command for this:
join --header --nocheck-order -t, -1 1 -2 1 -a 1 file1 file2
To answer the follow-up question, whether it's possible to compare the two second columns when names differ, like atlanta_1 and Atlanta, and add the official_Num_id and official_Name to File1:
$ awk '
BEGIN { FS=OFS="," }
NR==FNR { # file2
a[tolower($2)]=$0 # hash on lowercase city
next
}
{ # file1
split($2,b,"[^[:alpha:]]") # split on non-alphabet
print $0 (tolower(b[1]) in a?OFS a[tolower(b[1])]:"")
}' file2 file1
Num_id,Name,description1,description2,description3
?,atlanta_1,,,,RO_3,Atlanta
RO_5,babeni_SW,,,,RO_5,Babeni
? ,Bib1,,,,RO_7,Bib
RO_9,BoUba_456,,,,RO_9,Bouba
?,Castor,,,,RO_10,Castor
split breaks the Name field on non-alphabetic characters, i.e. the _ in atlanta_1, the 1 in Bib1, etc., so it might fail on cities with dashes; edit the [^[:alpha:]] pattern in split accordingly. The header doesn't match those extra columns, so rethink the header names.

how to read a file in awk command

I have two files that look like:
**file1.txt**
"a","1","11","111"
"b","2","22","222"
"c","3","33","333"
"d","4","44","444"
"e","5","55","555"
"f","6","66","666"
**file2.txt**
"b"
"d"
"a"
"c"
"e"
"f"
I need to create a script that reorders the lines of file1 to match the order of file2, e.g.:
"b","2","22","222"
"d","4","44","444"
"a","1","11","111"
"c","3","33","333"
"e","5","55","555"
"f","6","66","666"
I created a command that looks like:
nawk '/^("b")/' file1 ; nawk '/^("d")/' file1 ; nawk '/^("a")/' file1 ; nawk '/^("c")/' file1 ; nawk '/^("e")/' file1 ; nawk '/^("f")/' file1
It does the trick; however, I would like to automate it further but don't know how to proceed. How could I create a command or variable that would look at line 1 of file2 ("b") and put it in the above command, then look at line 2 of file2 ("d"), and so on? Basically, if possible, I would like the command to look at file2 and fill in the blanks in the above command. Any other more convenient commands you can suggest would be appreciated. Note that I currently have to insert the letters from file2 into the above command manually.
The actual file may contain well over 100 lines
awk -F, 'NR==FNR { a[$1]=$0; next }
($1 in a) { print a[$1] }' file1 file2
This reads all of file1 into memory, then prints in the order of file2. If file1 is very large, this may not be feasible.
This is a common Awk idiom; search the many near-duplicates if you need a more detailed explanation.
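One detail worth knowing about this idiom: keys in file2 that never appear in file1 are silently dropped. A sketch that reports unmatched keys on stderr instead, using made-up miniature samples (the "x" row is the deliberate mismatch); printing to "/dev/stderr" works in gawk, mawk, and busybox awk:

```shell
# Miniature samples; "x" has no counterpart in file1.txt.
cat > file1.txt <<'EOF'
"a","1","11","111"
"b","2","22","222"
EOF
cat > file2.txt <<'EOF'
"b"
"x"
"a"
EOF
# Matched keys print file1's full row in file2's order;
# unmatched keys go to stderr so nothing is lost silently.
awk -F, 'NR==FNR { a[$1]=$0; next }
         ($1 in a) { print a[$1]; next }
         { print "missing key: " $1 > "/dev/stderr" }' file1.txt file2.txt
```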