I have 2 CSV files:
file_1 columns: id,user_id,message_id,rate
file_2 columns: id,type,timestamp
The relation between the files is that file_1.message_id = file_2.id.
I want to create a 3rd file that will have the following columns:
file_1.id,file_1.user_id,file_1.message_id,file_1.rate,file_2.timestamp
Any ideas on how to do this in Linux?
You can use the join command like this:
join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort -t, -k 1,1 file2)
It first sorts the files (file1 by its 3rd field, file2 by its 1st field) and then joins them using the 3rd field of file1 and the 1st field of file2. The -o list then outputs exactly the fields you need.
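For example, reusing the sample rows shown further down in this thread (and assuming the files carry no header line):
$ cat file1
1,3334,424,44
$ cat file2
424,rr,22222
$ join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort -t, -k 1,1 file2)
1,3334,424,44,22222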
Seems to be a job for SQLite. Using the SQLite shell:
create table f1(id,user_id,message_id,rate);
create table f2(id,type,timestamp);
.separator ,
.import 'file_1.txt' f1
.import 'file_2.txt' f2
CREATE INDEX i1 ON f1(message_id ASC); -- optional
CREATE INDEX i2 ON f2(id ASC); -- optional
.output 'output.txt'
.separator ,
SELECT f1.id, f1.user_id, f1.message_id, f1.rate, f2.timestamp
FROM f1
JOIN f2 ON f2.id = f1.message_id;
.output stdout
.quit
Note that if even a single line has the wrong number of commas, the import stage will fail. You can prevent the rest of the script from running after such an error by putting .bail on at the beginning of the script.
If you want the unmatched ids instead, you can try:
SELECT f1.* FROM f1 LEFT JOIN f2 ON f2.id = f1.message_id WHERE f2.id IS NULL;
Which will select every row from f1 for which no corresponding row in f2 has been found.
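Assuming the commands above are saved in a file such as join.sql (the name here is just for illustration), the whole thing can be run non-interactively against a throwaway in-memory database:
$ sqlite3 :memory: < join.sql
$ cat output.txt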
You can try this:
1. Change all lines to start with the key:
awk -F',' '{ print $3 " file1 " $1 " " $2 " " $4 }' < file1 > temp
awk -F',' '{ print $1 " file2 " $2 " " $3 }' < file2 >> temp
Now the lines look like:
message_id file1 id user_id rate
id file2 type timestamp
2. Sort temp by the first two columns. Now related lines are adjacent, with the file1 line first:
sort -k 1,1 -k 2,2 < temp > temp2
3. Run awk to read the lines: on file1 lines save the fields, on file2 lines print them. A sketch of this last step follows.
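A minimal sketch of that final awk step, assuming the temp2 layout above (file3 is just an illustrative name for the result):
awk '
$2 == "file1" { id = $3; user = $4; rate = $5; next }      # save the file1 fields for this key
$2 == "file2" { print id "," user "," $1 "," rate "," $4 } # $1 is the key, $4 the timestamp
' temp2 > file3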
With awk you can try something like this -
awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
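The same one-liner, spelled out with comments:
awk -F, '
NR == FNR {            # first pass: file_1
    a[$3] = $0         # index each whole line by its 3rd field (message_id)
    next
}
($1 in a) {            # second pass: file_2, its id matches a stored message_id
    print a[$1] "," $3 > "file_3"   # append the file_2 timestamp to the stored line
}' file_1 file_2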
Test:
[jaypal:~/Temp] cat file_1 # Contents of File_1
id,user_id,message_id,rate
1,3334,424,44
[jaypal:~/Temp] cat file_2 # Contents of File_2
id,type,timestamp
424,rr,22222
[jaypal:~/Temp] awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
[jaypal:~/Temp] cat file_3 # Contents of File_3 made by the script
1,3334,424,44,22222
Related
I have file.txt like this:
It is necessary to use a script or command to make it look like this:
I tried sort -k2,2nr file.txt and it works, but it also changes the column in the middle.
Maybe you can help me with something: I also know that awk can work with the column specified by $, but I can't understand how to do it correctly.
Suggesting this simplified awk script:
script.awk
{ # read each line from the input file, NR is the built-in row counter (Number of Records)
arr1[NR] = $1; # read column #1 into arr1
arr2[NR] = $2; # read column #2 into arr2
arr3[NR] = $3; # read column #3 into arr3
}
END { # post processing after reading input file.
for (i = NR; i > 0; i--){ # scan the arrays from last row to first
print arr1[i], arr2[NR + 1 - i], arr3[i]; # output arr1 and arr3 in reverse order, arr2 in original order
}
}
running:
awk -f script.awk input.txt
This might work for you (GNU sort, sed and cat):
sort -k2,2n file |
sed -E 's/^\S+ (\S+).*/s#\\S+#\1#2/' |
cat -n |
sed -Ef - <(sort -k2,2nr file)
Sort column 2 of file in ascending order.
Extract column 2 and turn those values into a sed substitution script.
Apply line numbers to the above script.
Apply the script to the same file sorted by column 2 in descending order.
Same effect using paste:
paste <(sort -k2,2nr file) <(sort -k2,2n file) |
sed -E 's/^(\S+) \S+ (\S+)\t\S+ (\S+) .*/\1 \3 \2/'
I have a shell script in which a piece of code has to create a file by replacing the column 1 values of File2 with the values looked up in File1.
$ cat File1
CUST01,375
CUST02,379
and:
$ cat File2
CUST01,CUST01,233901
CUST01,CUST01,233902
required output from File2:
375,CUST01,233901
375,CUST01,233902
I have tried the command below to fill the values from File1 into File2:
awk -F, 'NR==FNR { a[$1]=$2; next } { for(i in a) {for(x=1;x<=NF;x++) {$x=(i==$x)?a[i]:$x } } }1' File1 File2
and I am getting the output below:
375,375,233901
375,375,233902
I want to replace the corresponding values in column 1 only; instead, this awk method ends up replacing the values in all columns. Any help?
You can try this, but it's without error checking:
declare -A dict
while IFS=, read -r key value; do
dict[$key]="$value"
done < file1
while IFS=, read -r col1 col2 col3; do
printf "%s,%s,%s\n" "${dict[$col1]}" "$col2" "$col3"
done < file2
Explanation:
# create associative array
declare -A dict
# read key and value from file, separator is ','
while IFS=, read -r key value; do
# write key and value into the associative array
dict[$key]="$value"
done < file1
# loop over file2, read three columns, separator is ','
while IFS=, read -r col1 col2 col3; do
# print columns, first column is the value from the associative array
printf "%s,%s,%s\n" "${dict[$col1]}" "$col2" "$col3"
done < file2
Certainly not an awk expert, but this seems to work for me:
awk -v FS=',' -v OFS=',' 'NR == FNR { a[$1] = $2; next } { $1 = a[$2] }1' File1 File2
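The same command, spelled out with comments:
awk -v FS=',' -v OFS=',' '
NR == FNR { a[$1] = $2; next }   # File1: remember the mapping, e.g. a["CUST01"] = 375
{ $1 = a[$2] }                   # File2: replace column 1 with the value mapped from column 2
1                                # always-true pattern: print the (modified) line
' File1 File2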
With your example input files, this is the output:
375,CUST01,233901
375,CUST01,233902
In Linux:
There are many CSV files in a folder, and I have to select those CSV files in which the column named PREDICT contains the value 646.
check this link:
https://prnt.sc/gone85
What kind of query works?
Providing test data, since none was provided:
$ cat > file1
ACTUAL PREDICT
1 2
3 646
$ cat > file2
ACTUAL PREDICT
1 2
3 666
Then some GNU awk (using nextfile) to select the files where there is a column PREDICT with a value of 646:
$ awk 'FNR==1{for(i=1;i<=NF;i++)if($i=="PREDICT")p=i}$p==646{print FILENAME;nextfile}' file1 file2
file1
Explained:
awk '
FNR==1 { # get the column number of PREDICT column for each file
for(i=1;i<=NF;i++)
if($i=="PREDICT")
p=i # set it to p
}
$p==646 { # if the value in column p is 646, we have a match
print FILENAME # print the filename
nextfile # and move on to the next file
}' file1 file2 # all the candidate files
A GNU awk solution without a loop:
$ cat tst.awk
BEGIN{FS=","}
FNR==1 && s=substr($0,1,index($0,"PREDICT")) { # look for index of PREDICT
i=gsub(/,/, "", s) + 1 # and count nr of times you
# can replace "," in preceding
# substring
}
s && $i==646 { print FILENAME; nextfile }
some input:
$ cat file1.csv
ACTUAL,PREDICT,COUNTRY,REGION,DIVISION,PRODUCTTYPE,PRODUCT,QUARTER,YEAR,MONTH
925,850,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,533,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,646,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
$ cat file2.csv
ACTUAL,PREDICT,COUNTRY,REGION,DIVISION,PRODUCTTYPE,PRODUCT,QUARTER,YEAR,MONTH
925,850,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,533,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,111,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
and:
$ cp file1.csv file3.csv
gives:
$ awk -f tst.awk *.csv
file1.csv
file3.csv
Or use a one-liner:
$ awk -F, 'FNR==1 && s=substr($0,1,index($0,"PREDICT")) {i=gsub(/,/, "", s) + 1}s && $i==646 { print FILENAME; nextfile }' *.csv
file1.csv
file3.csv
Tried searching but could not find anything substantial.
I have 2 files:
1:
asdfdata:tomatch1:asdffdataaa
asdfdata2:tomatch2:asdffdata33
asdf:tomatch3:asdfx
2:
bek:tomatch1:beke
lek:tomatch3:lekee
wen:tomatch2:wenne
I would like to match lines by the second column in both files, whatever data is on the line, and then print lines like so:
asdfdata:tomatch1:asdffdataaa:bek:beke
asdfdata2:tomatch2:asdffdata33:wen:wenne
etc.
I imagine awk would be best; "Match two files by column line by line - no key" seems kind of similar to this!
Thank you for any help!!
Use the join command like this:
join -t":" -1 2 -2 2 <(sort -t":" -k 2 file1.txt) <(sort -t":" -k 2 file2.txt)
Here's how it would work:
-t is for the delimiter
-1 2 - join on the second field (delimited by ':') of the first file
-2 2 - join on the second field (delimited by ':') of the second file
join needs its input to be sorted on the field we want to join by, hence the sort commands: the second field is specified with the -k option, -t again sets the delimiter to a colon (:), and the sorted input is passed to join via process substitution.
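Note that join prints the join field first (tomatch1:asdfdata:asdffdataaa:bek:beke). If the original column order from the expected output matters, an -o list (where 0 means the join field itself) can restore it, along these lines:
join -t: -1 2 -2 2 -o 1.1,0,1.3,2.1,2.3 <(sort -t: -k2,2 file1.txt) <(sort -t: -k2,2 file2.txt)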
I think this is simplest with join and sort. Assuming bash (for the process substitution):
join -t : -j 2 <(sort -t : -k 2 file1) <(sort -t : -k 2 file2)
Alternatively, with awk (if bash cannot be relied upon and temporary files are not wanted):
awk -F : 'NR == FNR { a[$2] = $0; next } { line = a[$2] FS $1; for(i = 3; i <= NF; ++i) line = line FS $i; print line }' file1 file2
That is
NR == FNR { # while processing the first file
a[$2] = $0 # remember lines by key
next
}
{ # while processing the second file
line = a[$2] FS $1 # append first field to remembered line
# from the first file with the current key
for(i = 3; i <= NF; ++i) { # append all other fields (except the second)
line = line FS $i
}
print line # print result
}
This might work for you (GNU sed):
sed -r 's|(.*)(:.*:)(.*)|/\2/s/$/:\1:\3/|' file2 | sed -f - file1
This constructs a sed script from file2 and runs it against file1.
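For the sample file2 above, the generated script (the output of the first sed) looks like this; each line appends the extra file2 fields to any file1 line containing the matching key:
/:tomatch1:/s/$/:bek:beke/
/:tomatch3:/s/$/:lek:lekee/
/:tomatch2:/s/$/:wen:wenne/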
Hi, I have the following text and I need to use awk or sed to print 3 separate columns:
11/13/14 101 HUDSON AUBONPAINJERSEY CITY NJ $4.15
11/22/14 MTAMVM*110TH ST/CATNEW YORK NY $19.05
11/22/14 DUANE READE #14226 0NEW YORK NY $1.26
So I'd like to produce a file containing all the dates, another file containing all the descriptions, and a third file containing all the amounts.
I can use awk to print the first column (print $1) and the -F'[$]' option to print the last column, but I'm not able to print just the middle column, as it contains spaces. Can I ignore the spaces, or is there a better way of doing this?
Thanking you in advance.
Try doing this:
$ awk '
{
print $1 > "dates"; $1=""
print $NF > "prices"; $NF=""
print $0 > "desc"
}
' file
or, if the columns are separated by runs of two or more spaces (as aligned output often is):
awk -F'  +' '
{
print $1 > "dates"
print $2 > "desc"
print $3 > "prices"
}
' file
Then:
$ cat dates
$ cat desc
$ cat prices
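With the sample input above, the three files should contain (modulo some leading/trailing whitespace in desc with the first variant):
$ cat dates
11/13/14
11/22/14
11/22/14
$ cat desc
101 HUDSON AUBONPAINJERSEY CITY NJ
MTAMVM*110TH ST/CATNEW YORK NY
DUANE READE #14226 0NEW YORK NY
$ cat prices
$4.15
$19.05
$1.26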
Wasn't fast enough to be the first to give an awk solution, so here's one with grep and sed...
grep -o '^[0-9]*/[0-9]*/[0-9]*' file #first col
sed 's|^[0-9/]* ||;s| *\$[0-9.]*$||' file #middle col
grep -o '\$[0-9.]*$' file #last col