Tried searching but could not find anything substantial.
I have 2 files:
1:
asdfdata:tomatch1:asdffdataaa
asdfdata2:tomatch2:asdffdata33
asdf:tomatch3:asdfx
2:
bek:tomatch1:beke
lek:tomatch3:lekee
wen:tomatch2:wenne
I would like to match lines by the second column in both files, whatever data is on that line, then combine them and print lines like so:
asdfdata:tomatch1:asdffdataaa:bek:beke
asdfdata2:tomatch2:asdffdata33:wen:wenne
etc.
I imagine awk would be best; "Match two files by column line by line - no key" seems kind of similar to this!
Thank you for any help!!
Use the join command like:
join -t":" -1 2 -2 2 <(sort -t":" -k 2 file1.txt) <(sort -t":" -k 2 file2.txt)
Here's how it works:
-t sets the delimiter
-1 2 joins on the second ":"-delimited field of the first file
-2 2 joins on the second ":"-delimited field of the second file
join needs its input sorted on the field being joined on; hence the sort commands, each selecting the second field with -k and again using the colon (:) as delimiter via -t, whose output is passed to join through process substitution.
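Note that join prints the join field first by default, so the output here starts with the tomatchN key. If the original column order matters, the -o option can list the output fields explicitly, e.g.:
join -t":" -1 2 -2 2 -o 1.1,1.2,1.3,2.1,2.3 <(sort -t":" -k 2 file1.txt) <(sort -t":" -k 2 file2.txt)
which prints exactly the requested format:
asdfdata:tomatch1:asdffdataaa:bek:beke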
I think this is simplest with join and sort. Assuming bash (for the process substitution):
join -t : -j 2 <(sort -t : -k 2 file1) <(sort -t : -k 2 file2)
Alternatively, with awk (if bash cannot be relied upon and temporary files are not wanted):
awk -F : 'NR == FNR { a[$2] = $0; next } { line = a[$2] FS $1; for(i = 3; i <= NF; ++i) line = line FS $i; print line }' file1 file2
That is
NR == FNR { # while processing the first file
a[$2] = $0 # remember lines by key
next
}
{ # while processing the second file
line = a[$2] FS $1 # append first field to remembered line
# from the first file with the current key
for(i = 3; i <= NF; ++i) { # append all other fields (except the second)
line = line FS $i
}
print line # print result
}
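Run against the sample files, this produces the requested lines (output order follows file2):
asdfdata:tomatch1:asdffdataaa:bek:beke
asdf:tomatch3:asdfx:lek:lekee
asdfdata2:tomatch2:asdffdata33:wen:wenne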
This might work for you (GNU sed):
sed -r 's|(.*)(:.*:)(.*)|/\2/s/$/:\1:\3/|' file2 | sed -f - file1
This constructs a sed script from file2 and runs it against file1.
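For the sample file2 this generates the script:
/:tomatch1:/s/$/:bek:beke/
/:tomatch3:/s/$/:lek:lekee/
/:tomatch2:/s/$/:wen:wenne/
i.e. for every line of file1 containing a given key, append the other two fields from the matching file2 line.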
I have file.txt like this
It is necessary to use a script or command to make it look like this:
I tried sort -k2,2nr file.txt; it works, but it also changes the column in the middle.
Maybe you can help me with something. I also know that awk can work with a column specified with $, but I can't understand how to do it correctly.
Suggesting this simplified awk script:
script.awk
{ # read each line from the input file; NR is the built-in record (line) counter
arr1[NR] = $1; # read column #1 into arr1
arr2[NR] = $2; # read column #2 into arr2
arr3[NR] = $3; # read column #3 into arr3
}
END { # post processing after reading input file.
for (i = 1; i <= NR; i++){ # scan the rows in their original order
print arr1[i], arr2[NR + 1 - i], arr3[i]; # orderly output arr1, arr3, but reverse order arr2
}
}
running:
awk -f script.awk input.txt
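For example, with a hypothetical input.txt (the question's sample data was not included):
a 1 x
b 2 y
c 3 z
the script keeps columns 1 and 3 in place and reverses column 2:
a 3 x
b 2 y
c 1 z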
This might work for you (GNU sort, sed and cat):
sort -k2,2n file |
sed -E 's/^\S+ (\S+).*/s#\\S+#\1#2/' |
cat -n |
sed -Ef - <(sort -k2,2nr file)
Sort the file by column 2 in ascending order.
Extract column 2 and turn those values into a sed substitution script.
Number the lines of that script; the line numbers become sed addresses.
Apply the script to the same file sorted by column 2 in descending order.
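With a hypothetical file holding the values 1, 2, 3 in column 2, the script fed to the final sed would look like this (cat -n numbering trimmed):
1	s#\S+#1#2
2	s#\S+#2#2
3	s#\S+#3#2
i.e. line N of the descending-sorted file gets its second field replaced by the N-th smallest value.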
Same effect using paste:
paste <(sort -k2,2nr file) <(sort -k2,2n file) |
sed -E 's/^(\S+) \S+ (\S+)\t\S+ (\S+) .*/\1 \3 \2/'
I have multiple files; one of them has the 4 lines below:
2345,abdgdhf,......,12879
6354, kfsjgdh,.....,"fac
74573,khskdd,......,5663
gffhf,gfgfhfh,......,7675
I want to write the lines where the first field is not all digits, or where the first character of the last field is a quotation mark, into another file. The expected output is a file with these two lines:
6354, kfsjgdh,.....,"fac
gffhf,gfgfhfh,......,7675
The command below prints the lines that do not start with a digit (i.e. where the first field is not a number):
for f in *.csv; do
awk -F "," '(/^[^0-9]/) {print }' "$f" > ./bad/"$f"
done
Output will be
gffhf,gfgfhfh,......,7675
And the command below gives me the first character of the last field:
awk -F "," '{print ($(NF))}' <file> |sed 's/\(.\{1\}\).*/\1/'
Output will be
1
"
5
7
I don't know how to merge this into my for loop and add a condition to only grab lines with a quotation mark as the first character of the last field, so that 6354, kfsjgdh,.....,"fac also shows up in the expected output.
You don't need a for loop:
awk -F',' '
FNR==1 { close(out); out = "./bad/" FILENAME }     # starting a new input file: switch the output file
($1 !~ /^[0-9]+$/) || ($NF ~ /^"/) { print > out } # first field not all digits, or last field starts with a quote
' *.csv
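Given the four sample lines above, ./bad/<file> would contain exactly the expected output:
6354, kfsjgdh,.....,"fac
gffhf,gfgfhfh,......,7675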
I have a list file, which has an id and a number, and I am trying to get the lines from a master file that do not have those ids.
List file
nw_66 17296
nw_67 21414
nw_68 21372
nw_69 27387
nw_70 15830
nw_71 32348
nw_72 21925
nw_73 20363
master file
nw_1 5896
nw_2 52814
nw_3 14537
nw_4 87323
nw_5 56466
......
......
nw_n xxxxx
So far I am trying this, but it is not working as expected.
for i in $(awk '{print $1}' list.txt); do grep -v -w $i master.txt; done;
Kindly help
Give this awk one-liner a try:
awk 'NR==FNR{a[$1]=1;next}!a[$1]' list master
Maybe this helps:
awk 'NR == FNR {id[$1]=1;next}
{
if (id[$1] == "") {
print $0
}
}' listfile masterfile
We accept 2 files as input above; the first one is listfile, the second is masterfile.
NR == FNR is true while awk is going through listfile. In the associative array id[], every id in listfile is made a key with the value 1.
When awk then goes through masterfile, it only prints a line if $1, i.e. the id, is not a key in the array id.
The OP attempted the following line:
for i in $(awk '{print $1}' list.txt); do grep -v -w $i master.txt; done;
This line will not work, as for every entry $i it prints all entries in master.txt that are not equal to "$i". As a consequence, you end up with multiple copies of master.txt, each missing a single line.
Example:
$ for i in 1 2; do grep -v -w "$i" <(seq 1 3); done
2 \ copy of seq 1 3 without entry 1
3 /
1 \ copy of seq 1 3 without entry 2
3 /
Furthermore, this attempt reads the file master.txt multiple times, which is very inefficient.
The Unix tool grep allows one to check multiple patterns stored in a file in a single go. This is done using the -f flag. Normally this looks like:
$ grep -f list.txt master.txt
The OP can use this now in the following way:
$ grep -vwf <(awk '{print $1}' list.txt) master.txt
But this would match anywhere on the line, not just in column 1.
The awk solution presented by Kent is more flexible and allows the OP to define a more tuned match:
awk 'NR==FNR{a[$1]=1;next}!a[$1]' list master
Here the OP clearly states, I want to match column 1 of list with column 1 of master and I don't care about spaces or whatever is in column 2. The grep solution could still match entries in column 2.
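If grep must be used, the list entries can be anchored to the start of the line; a small sketch (assuming the ids contain no regex metacharacters):
$ grep -vwf <(awk '{print "^" $1}' list.txt) master.txt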
I have two files. I need to fetch the last 6 characters of Field-11 from F1 and look them up in F2; if they match, I need to replace Field-9 of F1 with Field-1 and Field-2 of F2.
file1:
12345||||||756432101000||756432||756432101000||
aaaaa||||||986754812345||986754||986754812345||
ccccc||||||134567222222||134567||134567222222||
file2:
101000|AAAA
812345|20030
The expected output is:
12345||||||756432101000||101000AAAA ||756432101000||
aaaaa||||||986754812345||81234520030||986754812345||
ccccc||||||134567222222||134567||134567222222||
I have tried:
awk -F '|' -v OFS='|' 'NR==FNR{a[$1,$2];next} {b=substr($11,length($11)-7)} b in a {$9=a[$1,$2]}1'
I'd write it this way as a full script in a file, rather than a one-liner:
#!/usr/bin/awk -f
BEGIN {
FS = "|";
OFS = FS;
}
NR == FNR { # file2, named first on the command line: the replacements to use
map[$1] = $2
next;
}
{ # file1, named second: the main file to manipulate
b = substr($11,length($11)-5);
if (map[b]) {
$9 = b map[b]
}
print
}
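Assuming it is saved as script.awk and made executable, it would be invoked with the replacements file first:
./script.awk file2 file1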
You are almost there:
$ awk -F '|' -v OFS='|' 'NR==FNR{a[$1]=$2;next} {b=substr($11,length($11)-5)} b in a {$9=b a[b]}1' file2 file1
12345||||||756432101000||101000AAAA||756432101000||
aaaaa||||||986754812345||81234520030||986754812345||
ccccc||||||134567222222||134567||134567222222||
How it works
awk implicitly loops through every line in both files, starting with file2 because it is specified first on the command line.
-F '|'
This tells awk to use | as the field separator on input
-v OFS='|'
This tells awk to use | as the field separator on output
NR==FNR{a[$1]=$2;next}
While reading the first file, file2, this saves the second field, $2, as the value of associative array a with the first field, $1, as the key.
next tells awk to skip the rest of the commands and start over on the next line.
b=substr($11,length($11)-5)
This extracts the last six characters of field 11 and saves them in variable b.
b in a {$9=b a[b]}
This tests to see if b is one of the keys of associative array a. If it is, this assigns the ninth field, $9, to the combination of b and a[b].
1
This is awk's cryptic shorthand for print-the-line.
I have 2 CSV files:
file_1 columns: id,user_id,message_id,rate
file_2 columns: id,type,timestamp
The relation between the files is that file_1.message_id = file_2.id.
I want to create a 3rd file that will have the following columns:
file_1.id,file_1.user_id,file_1.message_id,file_1.rate,file_2.timestamp
Any ideas on how to do this in Linux?
You can use the join command like this:
join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort file2)
It first sorts the files (file1 by its 3rd field) and then joins them on the 3rd field of file1 and the 1st field of file2, outputting only the fields you need.
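For instance, with a file_1 row 1,3334,424,44 and a file_2 row 424,rr,22222 (the sample data used in the last answer below), it prints:
1,3334,424,44,22222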
Seems to be a job for SQLite. Using the SQLite shell:
create table f1(id,user_id,message_id,rate);
create table f2(id,type,timestamp);
.separator ,
.import 'file_1.txt' f1
.import 'file_2.txt' f2
CREATE INDEX i1 ON f1(message_id ASC); -- optional
CREATE INDEX i2 ON f2(id ASC); -- optional
.output 'output.txt'
.separator ,
SELECT f1.id, f1.user_id, f1.message_id, f1.rate, f2.timestamp
FROM f1
JOIN f2 ON f2.id = f1.message_id;
.output stdout
.q
Note that a single error in the number of commas on any line will make the import stage fail. You can prevent the rest of the script from running after such an error by putting .bail on at the beginning of the script.
If you want unmatched ids you can try:
SELECT f1.* FROM f1 LEFT JOIN f2 on f2.id = f1.message_id WHERE f2.id IS NULL
Which will select every row from f1 for which no corresponding row in f2 has been found.
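The whole thing can be run non-interactively by feeding the script to the sqlite3 shell, e.g. (the file names here are just examples):
sqlite3 join.db < script.sql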
You can try this:
1. Change all lines to start with the key:
awk -F',' '{ print $3 " file1 " $1 " " $2 " " $4 }' < file1 > temp
awk -F',' '{ print $1 " file2 " $2 " " $3 }' < file2 >> temp
Now the lines look like:
message_id file1 id user_id rate
id file2 type timestamp
2. Sort temp by the first two columns, so that related lines become adjacent, with the file1 line first:
sort -k 1,1 -k 2,2 < temp > temp2
3. Run awk to read the lines: on file1 lines save the fields, on file2 lines print them, as sketched below.
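A minimal sketch of that last step (untested; it relies on each file1 line sorting directly before its matching file2 line):
awk '$2 == "file1" { key = $1; id = $3; user = $4; rate = $5; next }
$2 == "file2" && $1 == key { print id "," user "," key "," rate "," $4 }' temp2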
With awk you can try something like this -
awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
Test:
[jaypal:~/Temp] cat file_1 # Contents of File_1
id,user_id,message_id,rate
1,3334,424,44
[jaypal:~/Temp] cat file_2 # Contents of File_2
id,type,timestamp
424,rr,22222
[jaypal:~/Temp] awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
[jaypal:~/Temp] cat file_3 # Contents of File_3 made by the script
1,3334,424,44,22222