I have a CSV file that is generated automatically every day, with output similar to the following example:
"N","3.5",3,"Bob","10/29/17"
"Y","4.5",5,"Bob","10/11/18"
"Y","5",6,"Bob","10/28/18"
"Y","3",1,"Jim",
"N","4",2,"Jim","09/29/17"
"N","2.5",4,"Joe","01/26/18"
I need to transform the text so that it is grouped by person (the fourth column), with all of that person's records on a single row and the columns repeated in the same sequence: 1,2,3,5. Some cells may be missing data but must remain in the sequence so the columns line up. So the output I need will look like this:
"Bob","N","3.5",3,"10/29/17","Y","4.5",5,"10/11/18","Y","5",6,"10/28/18"
"Jim","Y","3",1,,"N","4",2,"09/29/17"
"Joe","N","2.5",4,"01/26/18"
I am open to using sed, awk, or pretty much any standard Linux command to get this task done. I've been trying to use awk, and though I get close, I can't figure out how to finish it.
Here is the command that gets me closest. It lists the header and the names, but no other data:
awk -F"," 'NR==1; NR>1 {a[$4]=a[$4] ? i : ""} END {for (i in a) {print i}}' test2.csv
You need a little more code:
$ awk 'BEGIN {FS=OFS=","}
{k=$4; $4=$5; NF--; a[k]=(k in a?a[k] FS $0:$0)}
END {for(k in a) print k,a[k]}' file
"Bob","N","3.5",3,"10/29/17" ,"Y","4.5",5,"10/11/18" ,"Y","5",6,"10/28/18"
"Jim","Y","3",1, ,"N","4",2,"09/29/17"
"Joe","N","2.5",4,"01/26/18"
Note that the NF-- trick may not work in all awks.
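If that is a concern, here is a minimal sketch of an equivalent that avoids NF-- by rebuilding the record from the individual fields (assuming the five-column layout shown in the sample):
awk 'BEGIN {FS=OFS=","}
     {rec=$1 OFS $2 OFS $3 OFS $5                # drop the name column, keep the rest in order
      a[$4]=($4 in a ? a[$4] OFS rec : rec)}     # append this person's record to their row
     END {for (k in a) print k, a[k]}' file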
Could you please try the following too? Reading the Input_file two times, it will produce output in the same sequence in which the 4th-column values appear in the Input_file.
awk '
BEGIN{
  FS=OFS=","
}
FNR==NR{
  a[$4]=a[$4]?a[$4] OFS $1 OFS $2 OFS $3 OFS $5:$4 OFS $1 OFS $2 OFS $3 OFS $5
  next
}
a[$4]{
  print a[$4]
  delete a[$4]
}
' Input_file Input_file
If there is any chance that any of the CSV values contains a comma, then a "CSV-aware" tool would be advisable to obtain a reliable but straightforward solution.
One approach would be to use one of the many readily available csv2tsv command-line tools. A variety of elegant solutions then becomes possible. For example, one could pipe the CSV into csv2tsv, awk, and tsv2csv.
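For instance, a minimal sketch of such a pipeline, assuming csv2tsv and tsv2csv converters are available on the PATH (several implementations exist and their exact names and options vary), could look like this:
csv2tsv < input.csv |
awk 'BEGIN {FS=OFS="\t"}
     {rec=$1 OFS $2 OFS $3 OFS $5                # same grouping logic as above, on tab-separated fields
      a[$4]=($4 in a ? a[$4] OFS rec : rec)}
     END {for (k in a) print k, a[k]}' |
tsv2csv > output.csv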
Here is another solution that uses csv2tsv and jq:
csv2tsv < input.csv | jq -Rrn '
[inputs | split("\t")]
| group_by(.[3])[]
| sort_by(.[2])
| [.[0][3]] + ( map( del(.[3])) | add)
| @csv
'
This produces:
"Bob","N","3.5","3","10/29/17 ","Y","4.5","5","10/11/18 ","Y","5","6","10/28/18 "
"Jim","Y","3","1"," ","N","4","2","09/29/17 "
"Joe","N","2.5","4","01/26/18"
Trimming the excess spaces is left as an exercise :-)
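One way to do that exercise, assuming the stray blanks only ever appear immediately before a closing double quote, would be to append a final sed stage to the pipeline above:
sed 's/ *"/"/g'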
I am looking preferably for a bash/Linux method for the problem below.
I have a text file (input.txt) that looks like so (and many many more lines):
TCCTCCGC+TAGTTAGG_Vel_24_CC_LlanR_34 CC_LlanR
GGAGTATG+TCTATTCG_Vel_24_CC_LlanR_22 CC_LlanR
TTGACTAG+TGGAGTAC_Vel_02_EN_DavaW_11 EN_DavaW
TCGAATAA+TGGTAATT_Vel_24_CC_LlanR_23 CC_LlanR
CTGCTGAA+CGTTGCGG_Vel_02_EN_DavaW_06 EN_DavaW
index_07_barcode_04_PA-17-ACW-04 17-ACW
index_09_barcode_05_PA-17-ACW-05 17-ACW
index_08_barcode_37_PA-21-YC-15 21-YC
index_09_barcode_04_PA-22-GB-10 22-GB
index_10_barcode_37_PA-28-CC-17 28-CC
index_11_barcode_29_PA-32-MW-07 32-MW
index_11_barcode_20_PA-32-MW-08 32-MW
I want to produce a file that looks like this:
CC_LlanR(TCCTCCGC+TAGTTAGG_Vel_24_CC_LlanR_34,GGAGTATG+TCTATTCG_Vel_24_CC_LlanR_22,TCGAATAA+TGGTAATT_Vel_24_CC_LlanR_23)
EN_DavaW(TTGACTAG+TGGAGTAC_Vel_02_EN_DavaW_11,CTGCTGAA+CGTTGCGG_Vel_02_EN_DavaW_06)
17-ACW(index_07_barcode_04_PA-17-ACW-04,index_09_barcode_05_PA-17-ACW-05)
21-YC(index_08_barcode_37_PA-21-YC-15)
22-GB(index_09_barcode_04_PA-22-GB-10)
28-CC(index_10_barcode_37_PA-28-CC-17)
32-MW(index_11_barcode_29_PA-32-MW-07,index_11_barcode_20_PA-32-MW-08)
I thought that I could do something along the lines of this:
cat input.txt | awk '{print $1}' | grep -e "CC_LlanR" | paste -sd',' > intermediate_file
cat input.txt | awk '{print $2"("}' something something??
But I only know how to grep one pattern at a time? Is there a way to find all the matching lines at once and output them in this format?
Thank you!
(Happy Easter/ long weekend to all!)
With your shown samples, please try the following.
awk '
FNR==NR{
  arr[$2]=(arr[$2]?arr[$2]",":"")$1
  next
}
($2 in arr){
  print $2"("arr[$2]")"
  delete arr[$2]
}
' Input_file Input_file
2nd solution: within a single read of the Input_file, try the following.
awk '{arr[$2]=(arr[$2]?arr[$2]",":"")$1} END{for(i in arr){print i"("arr[i]")"}}' Input_file
Explanation (1st solution): adding a detailed explanation for the 1st solution here.
awk '                                ##Starting awk program from here.
FNR==NR{                             ##Checking condition FNR==NR, which is TRUE only while the Input_file is being read the first time.
  arr[$2]=(arr[$2]?arr[$2]",":"")$1  ##Creating an array indexed by the 2nd field and appending each 1st field to its value, comma-separated.
  next                               ##next will skip all further statements from here.
}
($2 in arr){                         ##Checking condition: if the 2nd field is present in arr, then do the following.
  print $2"("arr[$2]")"              ##Printing the 2nd field followed by arr[$2] in parentheses here.
  delete arr[$2]                     ##Deleting the arr value with the 2nd field index here.
}
' Input_file Input_file              ##Mentioning the Input_file names here.
Assuming your input is grouped by the $2 value as shown in your example (if it isn't, just run sort -k2,2 on your input first; see the sketch after the output below), this uses one pass, stores only one token at a time in memory, and produces the output in the same order of $2s as the input:
$ cat tst.awk
BEGIN { ORS="" }
$2 != prev {
printf "%s%s(", ORS, $2
ORS = ")\n"
sep = ""
prev = $2
}
{
printf "%s%s", sep, $1
sep = ","
}
END { print "" }
$ awk -f tst.awk input.txt
CC_LlanR(TCCTCCGC+TAGTTAGG_Vel_24_CC_LlanR_34,GGAGTATG+TCTATTCG_Vel_24_CC_LlanR_22)
EN_DavaW(TTGACTAG+TGGAGTAC_Vel_02_EN_DavaW_11)
CC_LlanR(TCGAATAA+TGGTAATT_Vel_24_CC_LlanR_23)
EN_DavaW(CTGCTGAA+CGTTGCGG_Vel_02_EN_DavaW_06)
17-ACW(index_07_barcode_04_PA-17-ACW-04,index_09_barcode_05_PA-17-ACW-05)
21-YC(index_08_barcode_37_PA-21-YC-15)
22-GB(index_09_barcode_04_PA-22-GB-10)
28-CC(index_10_barcode_37_PA-28-CC-17)
32-MW(index_11_barcode_29_PA-32-MW-07,index_11_barcode_20_PA-32-MW-08)
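If the input is not already grouped by $2, the sort mentioned above can simply be put in front, which then yields one combined line per $2 value (in sorted rather than input order):
sort -k2,2 input.txt | awk -f tst.awk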
This might work for you (GNU sed):
sed -E 's/^(\S+)\s+(\S+)/\2(\1)/;H
x;s/(\n\S+)\((\S+)\)(.*)\1\((\S+)\)/\1(\2,\4)\3/;x;$!d;x;s/.//' file
Append each manipulated line to the hold space.
Before moving on to the next line, accumulate like keys into a single line.
Delete every line except the last.
Replace the last line by the contents of the hold space.
Remove the first character (a newline artefact introduced by the H command) and print the result.
N.B. The final solution is unsorted and in the original order.
I am trying to extract the values/methods which are taking more than a particular number of milliseconds, but I am unable to provide the correct field separator:
awk -F'=' '$3>960' file
awk -F'=||' '$3>960' file
This is a sample line:
logAlias=Overall|logDurationMillis=34|logTimeStart=2019-09-12_05:22:02.602|logTimeStop=2019-09-12_05:22:02.636|logTraceUID=43adbcaf55de|getMethod1=26|getMethod2=0|getMethod3=0|getMethod4=1|getMethod5=8
I either do not see any result, or it gives me all the transactions.
Here is a generic, robust and easily extendible way:
awk -F'|' '{
  for(i=1;i<=NF;++i) {
    split($i,kv,"=")
    f[kv[1]]=kv[2]
  }
}
f["logDurationMillis"]>960' file
You may use
awk -F'[=|]' '$4>960' file
Note that [=|] is a regex matching either = or | and the value you want to compare against appears in the fourth field.
See the demo:
s="logAlias=Overall|logDurationMillis=34|logTimeStart=2019-09-12_05:22:02.602|logTimeStop=2019-09-12_05:22:02.636|logTraceUID=43adbcaf55de|getMethod1=26|getMethod2=0|getMethod3=0|getMethod4=1|getMethod5=8
logAlias=Overall|logDurationMillis=980|logTimeStart=2019-09-12_05:22:02.602|logTimeStop=2019-09-12_05:22:02.636|logTraceUID=43adbcaf55de|getMethod1=26|getMethod2=0|getMethod3=0|getMethod4=1|getMethod5=8"
awk -F'[=|]' '$4>960' <<< "$s"
Output:
logAlias=Overall|logDurationMillis=980|logTimeStart=2019-09-12_05:22:02.602|logTimeStop=2019-09-12_05:22:02.636|logTraceUID=43adbcaf55de|getMethod1=26|getMethod2=0|getMethod3=0|getMethod4=1|getMethod5=8
Could you please try the following? This may help you in case your string does not appear at a fixed position.
awk '
match($0,/logDurationMillis=[0-9]+/){
  if(substr($0,RSTART+18,RLENGTH-18)+0>960){   ##18 is the length of "logDurationMillis=", so this extracts just the number.
    print
  }
}
' Input_file
2nd Solution:
awk '
match($0,/logDurationMillis=[0-9]+/){
  val=substr($0,RSTART,RLENGTH)
  sub(/.*=/,"",val)
  if(val+0>960){
    print
  }
}
' Input_file
Here is how I do it:
awk -F'logDurationMillis=' '{split($2,a,"[|]")} a[1]>960' file
If it's the log duration logDurationMillis you are looking for, I set it as the field separator. This way I know for sure the next field starts with the value to get. Then I split that field on | to get the number at the front of it. a[1] then has your value and you can test it against whatever you need. No loop, so it should be fast.
I have one file which has the input below:
1,1_2_34_45.csv,2345
2,1_2_34_45.csv,2345
3,1_2_34_45.csv,2345
4,1_2_34_46.csv,2346
5,1_2_34_47.csv,2345
For that I need the following output:
1_2_34_45.csv,2345,3
1_2_34_46.csv,2346,1
1_2_34_47.csv,2345,1
I have tried the code below:
awk -F , '{a[$2]++ }END{for(i in a){print i,a[i]}}' table.txt > count.txt
but it prints only the $2 value and the count; the other column details don't show up as in the desired output. Please help me.
Storing the values you want in the array key might be sufficient.
$ awk -F, '{a[$2 FS $3]++} END {for(i in a){print i,a[i]}}' OFS=, input.txt
1_2_34_47.csv,2345,1
1_2_34_46.csv,2346,1
1_2_34_45.csv,2345,3
Note that with an awk script this simple, the output order cannot be guaranteed. (That is, array order is not guaranteed.) If you want to control the order, you'd be best to use an additional array:
$ awk -F, '{k=$2 FS $3} !a[k]++{o[i++]=k} END {for(j=0;j<i;j++){print o[j],a[o[j]]}}' OFS=, input.txt
1_2_34_45.csv,2345,3
1_2_34_46.csv,2346,1
1_2_34_47.csv,2345,1
The second array has an incrementing key that we can step through using a for loop as a counter. The counter preserves the original order of "new" keys in the input stream.
I have a dataset that contains rows of UUIDs followed by locations and transaction IDs. The UUIDs are separated by a semi-colon (';') and the transactions are separated by tabs, like the following:
01234;LOC_1=ABC LOC_1=BCD LOC_2=CDE
56789;LOC_2=DEF LOC_3=EFG
I know all of the location codes in advance. What I want to do is transform this data into a format I can load into SQL/Postgres for analysis, like this:
01234;LOC_1=ABC
01234;LOC_1=BCD
01234;LOC_2=CDE
56789;LOC_2=DEF
56789;LOC_3=EFG
I'm pretty sure I can do this easily using awk (or similar) by looking up location IDs from a file (ex. LOC_1) and matching any instance of the location ID and printing that out next to the UUID. I haven't been able to get it right yet, and any help is much appreciated!
My locations file is named location and my dataset is data. Note that I can edit the original file or write the results to a new file, either is fine.
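For reference, a minimal sketch of the lookup-file idea described above, assuming location simply lists one known code per line (e.g. LOC_1), might look like this:
awk 'NR==FNR {loc[$1]; next}          # first pass: remember the known location codes
     {
       for (i=2; i<=NF; ++i) {        # with FS set to ; or tab, $2..$NF are the LOC_x=... tokens
         split($i, kv, "=")
         if (kv[1] in loc) print $1 ";" $i
       }
     }' location FS='[;\t]' data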
awk without using split: use semicolon or tab as the field separator
awk -F'[;\t]' -v OFS=';' '{for (i=2; i<=NF; i++) print $1,$i}' file
I don't think you need to match against a known list of locations; you should be able to just print each line as you go:
$ awk '{print $1; split($1,a,";"); for (i=2; i<=NF; ++i) print a[1] ";" $i}' file
01234;LOC_1=ABC
01234;LOC_1=BCD
01234;LOC_2=CDE
56789;LOC_2=DEF
56789;LOC_3=EFG
Your comment about knowing the locations, and the mapping file, make me suspect that what your example seems to have done isn't exactly what is being asked - but it seems like you want to reformat each set of tab-delimited LOC= values into rows with their UUID in front.
If so, this will do the trick:
awk ' BEGIN {OFS=FS=";"} {split($2,locs,"\t"); for (n in locs) { print $1,locs[n]}}'
Given:
$ cat -A data.txt
01234;LOC_1=ABC^ILOC_1=BCD^ILOC_2=CDE$
56789;LOC_2=DEF^ILOC_3=EFG$
Then:
$ awk ' BEGIN {OFS=FS=";"} {split($2,locs,"\t"); for (n in locs) { print $1,locs[n]}}' data.txt
01234;LOC_1=ABC
01234;LOC_1=BCD
01234;LOC_2=CDE
56789;LOC_2=DEF
56789;LOC_3=EFG
The BEGIN {OFS=FS=";"} block sets the input and output delimiter to ;.
For each row, we then split the second field into an array named locs, splitting on tab, via - split($2,locs,"\t")
And then loop through locs printing the UUID and each loc value - for (n in locs) { print $1,locs[n]}
How about one without a loop or split, as follows (considering that the Input_file is the same as the shown samples only)?
awk 'BEGIN{FS=OFS=";"}{gsub(/[[:space:]]+/,"\n"$1 OFS)} 1' Input_file
This might work for you (GNU sed):
sed -r 's/((.*;)\S+)\s+(\S+)/\1\n\2\3/;P;D' file
Repeatedly replace the white space between locations with a newline, followed by the UUID and a ;, printing/deleting each line as it appears.
Raw Data:
1234,yes
2234,no
1234,no
5678,yes
I was trying to remove duplicates based on the 2nd column (Yes||No), in which only the one with 'no' will be removed. I saw a lot of solutions that deal with integers, but not with strings. So far I came up with this:
awk -F"," '{if($1 in a)
a[$1]=$2=='yes'?$0:a[$1];
else
a[$1]=$0}
END{for(i in a)print a[i]}'
However, the script does not seem to work, as it removes all the duplicated entries. If anyone can help me, I'd appreciate it.
Your problem is pretty simple: you have forgotten to quote your condition correctly:
awk -F"," '{if($1 in a)a[$1]=$2=="yes"?$0:a[$1];else a[$1]=$0}END{for(i in a)print a[i]}'
You have to write $2=="yes" and not $2=='yes'
Edit:
To avoid showing ",no" in the final result, you can try this variation (it was more difficult to adapt your requirement to the previous pattern):
awk -F"," '{if($1 in a){if($2=="yes"){a[$1]=$0;}}else{sub(/,no$/,"",$0);a[$1]=$0}}END{for(i in a)print a[i]}'
For more information about the sub(...) function, see the awk documentation.
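As a quick illustration of what sub() does there (it edits its target, $0 by default, in place and returns the number of replacements made):
echo "2234,no" | awk '{n=sub(/,no$/,""); print n, $0}'
# prints: 1 2234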
Glad to help you :)
[ If I understood correctly what you're trying to do... ]
This one-liner removes "no" lines only when the value of the first column has already appeared in a prior "yes" or "no" line.
awk -F"," '{if (!($2=="no" && $1 in a)) {print $0;} a[$1]=$0 }' file.txt