Replace specific column values using a reference from another file - Linux

I have a shell script in which a piece of code has to create a file by replacing the column 1 values of File2 with the lookup values from File1.
$ cat File1
CUST01,375
CUST02,379
and:
$ cat File2
CUST01,CUST01,233901
CUST01,CUST01,233902
required output from File2:
375,CUST01,233901
375,CUST01,233902
I have tried the below command to fill in values from File1 into File2:
awk -F, 'NR==FNR { a[$1]=$2; next } { for(i in a) {for(x=1;x<=NF;x++) {$x=(i==$x)?a[i]:$x } } }1' File1 File2
and I get the below output:
375,375,233901
375,375,233902
I want to replace the corresponding values in column 1 only; instead, the awk approach replaces matching values in all columns. Any help?

You can try this, but it's without error checking:
declare -A dict
while IFS=, read -r key value; do
    dict[$key]="$value"
done < File1
while IFS=, read -r col1 col2 col3; do
    printf "%s,%s,%s\n" "${dict[$col1]}" "$col2" "$col3"
done < File2
Explanation:
# create an associative array
declare -A dict
# read key and value from the file; the separator is ','
while IFS=, read -r key value; do
    # write the key and value into the associative array
    dict[$key]="$value"
done < File1
# loop over File2, reading three columns; the separator is ','
while IFS=, read -r col1 col2 col3; do
    # print the columns; the first column is the value from the associative array
    printf "%s,%s,%s\n" "${dict[$col1]}" "$col2" "$col3"
done < File2
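If a key from File2 might be missing from File1, a :- fallback in the expansion (my addition, not part of the original answer) keeps the original value instead of printing an empty first field:
# fall back to the original value when the key is not in the array
printf "%s,%s,%s\n" "${dict[$col1]:-$col1}" "$col2" "$col3"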

Certainly not an awk expert, but this seems to work for me:
awk -v FS=',' -v OFS=',' 'NR == FNR { a[$1] = $2; next } { $1 = a[$2] }1' File1 File2
With your example input files, this is the output:
375,CUST01,233901
375,CUST01,233902
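If File2 can contain keys that are absent from File1 (an assumption about your data), a guarded assignment avoids blanking column 1:
awk -v FS=',' -v OFS=',' 'NR == FNR { a[$1] = $2; next } $2 in a { $1 = a[$2] }1' File1 File2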

Related

How can I do reverse sorting using Linux commands

I have a file.txt like this:
It is necessary to use a script or command to make it look like this:
I tried sort -k2,2nr file.txt and it works, but it also changes the column in the middle.
Maybe you can help me with something. I also know that awk can work with a column specified with $, but I can't understand how to do it correctly.
Suggesting this simplified awk script:
script.awk
{ # read each line from the input file; NR is the built-in variable `Number of Record`
    arr1[NR] = $1; # store column #1 in arr1
    arr2[NR] = $2; # store column #2 in arr2
    arr3[NR] = $3; # store column #3 in arr3
}
END { # post-processing after reading the input file
    for (i = NR; i > 0; i--){ # scan the rows from last to first
        print arr1[i], arr2[NR + 1 - i], arr3[i]; # arr1 and arr3 in reversed row order, arr2 in the original order
    }
}
running:
awk -f script.awk input.txt
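For illustration (hypothetical data, since the sample file was not shown in the question), given an input.txt of:
a 1 x
b 2 y
c 3 z
the script prints the rows in reverse order while column 2 keeps its original order:
c 1 z
b 2 y
a 3 x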
This might work for you (GNU sort, sed and cat):
sort -k2,2n file |
sed -E 's/^\S+ (\S+).*/s#\\S+#\1#2/' |
cat -n |
sed -Ef - <(sort -k2,2nr file)
Sort column 2 of file in ascending order.
Extract column 2 and turn those values into a sed substitution script.
Apply line numbers to the above script.
Apply the script to the same file sorted by column 2 in descending order.
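For illustration, if the ascending column-2 values were 1, 2 and 3 (hypothetical), the script fed to sed -Ef - would look like:
1	s#\S+#1#2
2	s#\S+#2#2
3	s#\S+#3#2
where the line number prepended by cat -n acts as a sed address, so the nth substitution applies only to the nth line of the descending sort.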
Same effect using paste:
paste <(sort -k2,2nr file) <(sort -k2,2n file) |
sed -E 's/^(\S+) \S+ (\S+)\t\S+ (\S+) .*/\1 \3 \2/'

CSV file manipulation in Unix and append a value to each line

I have the below csv file
,,,Test File,
,todays Date:,01/10/2018,Generation date,10/01/2019 11:20:58
Header 1,Header 2,Header 3,Header 4,Header 5
,My account no,100102GFC,,
A,B,C,D,E
A,B,C,D,E
A,B,C,D,E
TEST
I need to extract today's date, which is in the 3rd column of the 2nd line,
and also the account number, which is in the 3rd column of the 4th line.
Below is the new file that I have to create; those extracted values
from the 2nd and 4th lines need to be appended at the end of each line.
The new file will contain the data rows from the 5th line through the (n-1)th line:
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
Could you kindly help me with how to do the same in a shell script?
Here is what I tried; I am new to shell scripting and unable to combine all of these.
To extract the date from the second row:
sed -sn 2p test.csv | cut -d ',' -f 3
To extract the account number (it is on the 4th line):
sed -sn 4p test.csv | cut -d ',' -f 3
To extract the actual data:
tail -n +5 test.csv | head -n -1 > temp.csv
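For what it's worth, those three pieces can be combined as-is with command substitution (a sketch, assuming GNU head and the exact layout above; | is used as the sed delimiter because the date contains slashes):
d=$(sed -sn 2p test.csv | cut -d ',' -f 3)   # todays date
a=$(sed -sn 4p test.csv | cut -d ',' -f 3)   # account number
tail -n +5 test.csv | head -n -1 | sed "s|\$|,$d,$a|" > temp.csv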
Try awk:
awk -F, 'NR==2{d=$3}NR==4{a=$3}NR>4{if (line) print line; line = $0 "," d "," a;}' Inputfile.csv
Eg:
$ cat file1
,,,Test File,
,todays Date:,01/10/2018,Generation date,10/01/2019 11:20:58
Header 1,Header 2,Header 3,Header 4,Header 5
,My account no,100102GFC,,
A,B,C,D,E
A,B,C,D,E
A,B,C,D,E
TEST
$ awk -F, 'NR==2{d=$3}NR==4{a=$3}NR>4{if (line) print line; line = $0 "," d "," a;}' file1
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
I misunderstood your meaning before you edited your question; I updated my answer afterwards.
In the awk command:
NR is the line number, -F assigns the field separator, d stores the date and a the account number.
The line $0 is simply concatenated with d and a.
You don't want the last line, so I used line to delay printing; the last line is never printed (though it is saved to line and could be used if an END block were given).
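A variant of the same idea (my sketch, not part of the original answer) delays the raw line instead, so the skipped trailer is still intact and usable in an END block:
awk -F, 'NR==2{d=$3} NR==4{a=$3} NR>4{ if (line) print line "," d "," a; line = $0 } END{ print "skipped trailer: " line }' Inputfile.csv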
You can also try Perl:
$ cat dawn.txt
,,,Test File,
,todays Date:,01/10/2018,Generation date,10/01/2019 11:20:58
Header 1,Header 2,Header 3,Header 4,Header 5
,My account no,100102GFC,,
A,B,C,D,E
A,B,C,D,E
A,B,C,D,E
TEST
$ perl -F, -lane ' $dt=$F[2] if $.==2 ; $ac=$F[2] if $.==4; if($.>4 and ! eof) { print "$_,$dt,$ac" } ' dawn.txt
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
$
$ cat tst.awk
BEGIN { FS=OFS="," }
NR == 2 { date = $3 }
NR == 4 { acct = $3 }
NR>4 && NF>1 { print $0, date, acct }
$ awk -f tst.awk file
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
or, depending on your requirements and actual input data:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR == 2 { date = $3 }
NR == 4 { acct = $3 }
NR>4 {
    if (out != "") {
        print out
    }
    out = $0 OFS date OFS acct
}
$ awk -f tst.awk file
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC

How to replace fields using substr comparison

I have two files. I need to fetch the last 6 characters of field 11 from F1 and look them up in F2; if they match, I need to replace field 9 of F1 with field 1 and field 2 of F2.
file1:
12345||||||756432101000||756432||756432101000||
aaaaa||||||986754812345||986754||986754812345||
ccccc||||||134567222222||134567||134567222222||
file2:
101000|AAAA
812345|20030
The expected output is:
12345||||||756432101000||101000AAAA ||756432101000||
aaaaa||||||986754812345||81234520030||986754812345||
ccccc||||||134567222222||134567||134567222222||
I have tried:
awk -F '|' -v OFS='|' 'NR==FNR{a[$1,$2];next} {b=substr($11,length($11)-7)} b in a {$9=a[$1,$2]}1'
I'd write it this way as a full script in a file, rather than a one-liner:
#!/usr/bin/awk -f
BEGIN {
    FS = "|";
    OFS = FS;
}
NR == FNR { # file2, listed first on the command line: the replacements to use
    map[$1] = $2
    next;
}
{ # file1: the main file to manipulate
    b = substr($11, length($11) - 5);
    if (map[b]) {
        $9 = b map[b]
    }
    print
}
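Since the NR == FNR block must see the replacement file first, run it with file2 before file1:
awk -f script.awk file2 file1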
$ awk -F '|' -v OFS='|' 'NR==FNR{a[$1]=$2;next} {b=substr($11,length($11)-5)} b in a {$9=b a[b]}1' file2 file1
12345||||||756432101000||101000AAAA||756432101000||
aaaaa||||||986754812345||81234520030||986754812345||
ccccc||||||134567222222||134567||134567222222||
How it works
awk implicitly loops through every line in both files, starting with file2 because it is specified first on the command line.
-F '|'
This tells awk to use | as the field separator on input
-v OFS='|'
This tells awk to use | as the field separator on output
NR==FNR{a[$1]=$2;next}
While reading the first file, file2, this saves the second field, $2, as the value of associative array a with the first field, $1, as the key.
next tells awk to skip the rest of the commands and start over on the next line.
b=substr($11,length($11)-5)
This extracts the last six characters of field 11 and saves them in variable b.
b in a {$9=b a[b]}
This tests to see if b is one of the keys of associative array a. If it is, this assigns the ninth field, $9, to the combination of b and a[b].
1
This is awk's cryptic shorthand for print-the-line.
You are almost there:
$ awk -F '|' -v OFS='|' 'NR==FNR{a[$1]=$2;next} {b=substr($11,length($11)-5)} b in a {$9=b a[b]}1' file2 file1
12345||||||756432101000||101000AAAA||756432101000||
aaaaa||||||986754812345||81234520030||986754812345||
ccccc||||||134567222222||134567||134567222222||
$

Match by column and print to line

Tried searching but could not find anything substantial.
I have 2 files:
1:
asdfdata:tomatch1:asdffdataaa
asdfdata2:tomatch2:asdffdata33
asdf:tomatch3:asdfx
2:
bek:tomatch1:beke
lek:tomatch3:lekee
wen:tomatch2:wenne
I would like to match on the second column in both files, whatever data is on the line, then take this and print lines like so:
asdfdata:tomatch1:asdffdataaa:bek:beke
asdfdata2:tomatch2:asdffdata33:wen:wenne
etc.
I imagine awk would be best. "Match two files by column line by line - no key" seems kind of similar to this!
Thank you for any help!!
Use join command like:
join -t":" -1 2 -2 2 <(sort -t":" -k 2 file1.txt) <(sort -t":" -k 2 file2.txt)
Here's how it would work:
-t sets the delimiter
-1 2 - join on the second field of the first file, delimited by ':'
-2 2 - join on the second field of the second file, delimited by ':'
join needs its input to be sorted on the field we join by; hence the sort commands, with the second field given via the -k option, the -t option again setting the delimiter to a colon (:), and the sorted output passed to join.
I think this is simplest with join and sort. Assuming bash (for the process substitution):
join -t : -j 2 <(sort -t : -k 2 file1) <(sort -t : -k 2 file2)
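Note that join prints the join field first, i.e. tomatch1:asdfdata:asdffdataaa:bek:beke. If you need the exact column order shown in the question, the -o option can reorder the fields, e.g.:
join -t : -j 2 -o 1.1,1.2,1.3,2.1,2.3 <(sort -t : -k 2 file1) <(sort -t : -k 2 file2)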
Alternatively, with awk (if bash cannot be relied upon and temporary files are not wanted):
awk -F : 'NR == FNR { a[$2] = $0; next } { line = a[$2] FS $1; for(i = 3; i <= NF; ++i) line = line FS $i; print line }' file1 file2
That is
NR == FNR {                    # while processing the first file
    a[$2] = $0                 # remember lines by key
    next
}
{                              # while processing the second file
    line = a[$2] FS $1         # append the first field to the remembered line
                               # from the first file with the current key
    for(i = 3; i <= NF; ++i) { # append all other fields (except the second)
        line = line FS $i
    }
    print line                 # print the result
}
This might work for you (GNU sed):
sed -r 's|(.*)(:.*:)(.*)|/\2/s/$/:\1:\3/|' file2 | sed -f - file1
This constructs a sed script from file2 and runs it against file1.

Linux - join 2 CSV files

I have 2 CSV files:
file_1 columns: id,user_id,message_id,rate
file_2 columns: id,type,timestamp
The relation between the files is that file_1.message_id = files_2.id.
I want to create a 3rd file that will have the following columns:
file_1.id,file_1.user_id,file_1.message_id,file_1.rate,file_2.timestamp
Any ideas on how to do this in Linux?
You can use the join command like this:
join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort file2)
It first sorts the files (file1 is sorted by the 3rd field) and then joins them using the 3rd field of file1 and the 1st field of file2. It then outputs the fields you need.
Seems to be a job for SQLite. Using the SQLite shell:
create table f1(id,user_id,message_id,rate);
create table f2(id,type,timestamp);
.separator ,
.import 'file_1.txt' f1
.import 'file_2.txt' f2
CREATE INDEX i1 ON f1(message_id ASC); -- optional
CREATE INDEX i2 ON f2(id ASC); -- optional
.output 'output.txt'
.separator ,
SELECT f1.id, f1.user_id, f1.message_id, f1.rate, f2.timestamp
FROM f1
JOIN f2 ON f2.id = f1.message_id;
.output stdout
.q
Note that if there is a single error in the number of commas on any line, the import stage will fail. You can stop the rest of the script from running in that case by putting .bail on at the beginning of the script.
If you want unmatched ids you can try:
SELECT f1.* FROM f1 LEFT JOIN f2 on f2.id = f1.message_id WHERE f2.id IS NULL
Which will select every row from f1 for which no corresponding row in f2 has been found.
You can try this:
1. Change all lines to start with the key:
awk -F',' '{ print $3 " file1 " $1 " " $2 " " $4 }' < file1 > temp
awk -F',' '{ print $1 " file2 " $2 " " $3 }' < file2 >> temp
Now the lines look like:
message_id file1 id user_id rate
id file2 type timestamp
2. Sort temp by the first two columns. Now related lines are adjacent, with the file1 line first:
sort -k 1,1 -k 2,2 < temp > temp2
3. Run awk to read the lines: on file1 lines save the fields, on file2 lines print them (see the sketch below).
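That last step isn't shown above; a sketch of it, assuming the intermediate layout from step 1 (marker in field 2, key in field 1):
awk '$2 == "file1" { key = $1; id = $3; user = $4; rate = $5; next }
$2 == "file2" && $1 == key { print id "," user "," key "," rate "," $4 }' temp2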
With awk you can try something like this -
awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
Test:
[jaypal:~/Temp] cat file_1 # Contents of File_1
id,user_id,message_id,rate
1,3334,424,44
[jaypal:~/Temp] cat file_2 # Contents of File_2
id,type,timestamp
424,rr,22222
[jaypal:~/Temp] awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
[jaypal:~/Temp] cat file_3 # Contents of File_3 made by the script
1,3334,424,44,22222
