Remove duplicates, keeping only the last occurrence, in a Linux file [duplicate] - linux
This question already has answers here:
Eliminate partially duplicate lines by column and keep the last one
(4 answers)
Closed 6 years ago.
INPUT FILE :
5,,OR1,1000,Nawras,OR,20160105T05:30:17+0400,20181231T23:59:59+0400,,user,,aaa8016058f008ddceae6329f0c5d551,50293277591,,,30001,C
5,,OR1,1000,Nawras,OR,20160105T05:30:17+0400,20181231T23:59:59+0400,20160217T01:45:18+0400,,user,aaa8016058f008ddceae6329f0c5d551,50293277591,,,30001,H
5,,OR2,2000,Nawras,OR,20160216T06:30:18+0400,20191231T23:59:59+0400,,user,,f660818af5625b3be61fe12489689601,50328589469,,,30002,C
5,,OR2,2000,Nawras,OR,20160216T06:30:18+0400,20191231T23:59:59+0400,20160216T06:30:18+0400,,user,f660818af5625b3be61fe12489689601,50328589469,,,30002,H
5,,OR1,1000,Nawras,OR,20150328T03:00:13+0400,20171230T23:59:59+0400,,user,,22bf18b024e1d4f42ac79943062cf576,50212935879,,,10001,C
5,,OR1,1000,Nawras,OR,20150328T03:00:13+0400,20171230T23:59:59+0400,20150328T03:00:13+0400,,user,22bf18b024e1d4f42ac79943062cf576,50212935879,,,10001,H
0,,OR5,5000,Nawras,OR,20160421T02:45:16+0400,20191231T23:59:59+0400,,user,,c7c501ac92d85a04bb26c575929e9317,50329769192,,,11001,C
0,,OR5,5000,Nawras,OR,20160421T02:45:16+0400,20191231T23:59:59+0400,20160421T02:45:16+0400,,user,c7c501ac92d85a04bb26c575929e9317,50329769192,,,11001,H
0,,OR1,1000,Nawras,OR,20160330T02:00:14+0400,20181231T23:59:59+0400,,user,,d4ea749306717ec5201d264fc8044201,50285524333,,,11001,C
DESIRED OUTPUT :
5,,OR1,1000,UY,OR,20160105T05:30:17+0400,20181231T23:59:59+0400,20160217T01:45:18+0400,,user,aaa8016058f008ddceae6329f0c5d551,50293277591,,,30001,H
5,,OR2,2000,UY,OR,20160216T06:30:18+0400,20191231T23:59:59+0400,20160216T06:30:18+0400,,user,f660818af5625b3be61fe12489689601,50328589469,,,30002,H
5,,OR1,1000,UY,OR,20150328T03:00:13+0400,20171230T23:59:59+0400,20150328T03:00:13+0400,,user,22bf18b024e1d4f42ac79943062cf576,50212935879,,,10001,H
0,,OR5,5000,UY,OR,20160421T02:45:16+0400,20191231T23:59:59+0400,20160421T02:45:16+0400,,user,c7c501ac92d85a04bb26c575929e9317,50329769192,,,11001,H
0,,OR1,1000,UY,OR,20160330T02:00:14+0400,20181231T23:59:59+0400,,user,,d4ea749306717ec5201d264fc8044201,50285524333,,,11001,C*
CODE USED :
for i in `cat file | awk -F, '{print $13}' | sort | uniq`
do
grep $i file | tail -1 >> TESTINGGGGGGG_SV
done
This takes a lot of time because the file has 300 million records, with 65 million unique values in the 13th column.
So I need an approach that, for each distinct value in the 13th column, outputs only its last occurrence in the file.
awk to the rescue!
awk -F, 'p!=$13 && p0 {print p0} {p=$13; p0=$0} END{print p0}' file
This expects the input to be sorted (grouped) on the 13th field.
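If the file isn't already grouped on that field, one possible pre-sort is sketched below; it assumes GNU sort, whose -s flag requests a stable sort so the original order (and therefore the last occurrence per key) is preserved within each group:
sort -t, -k13,13 -s file | awk -F, 'p!=$13 && p0 {print p0} {p=$13; p0=$0} END{print p0}'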
Please post the timing if you can successfully run the script.
If sorting is not possible, another option is
tac file | awk -F, '!a[$13]++' | tac
reverse the file, take the first entry for $13 and reverse the results back.
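A tiny illustration of the same idea on hypothetical three-field data, deduplicating on field 3 instead of 13 just to keep the lines short:
$ printf 'a,1,X\nb,2,Y\nc,3,X\n' | tac | awk -F, '!a[$3]++' | tac
b,2,Y
c,3,X
Only the last line for each key value survives, and the survivors keep their original relative order.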
Here's a solution that should work:
awk -F, '{rows[$13]=$0} END {for (i in rows) print rows[i]}' file
Explanation:
rows is an associative array indexed by field 13 ($13); the element indexed by $13 gets overwritten every time there's a duplicate of field 13, and its value is the whole line ($0).
But this is inefficient in terms of memory, since the array holds one full line per unique key; note also that for (i in rows) does not preserve the original line order.
An improvement to the above solution that's still not using sorting is to just save the line numbers in the associative array:
awk -F, '{rows[$13]=NR}END {for(i in rows) print rows[i]}' file|while read lN; do sed "${lN}q;d" file; done
Explanation:
rows as before but the values are the line numbers and not the whole lines
awk -F, '{rows[$13]=NR}END {for(i in rows) print rows[i]}' file outputs a list of row numbers containing the sought lines
sed "${lN}q;d" fetches line number lN from file
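One possible refinement, sketched here rather than taken from the answer above: calling sed once per unique key rescans the file every time, so with 65 million keys it is cheaper to collect all the wanted line numbers first and extract them in a single second pass (the file name wanted_lines is just a placeholder):
awk -F, '{rows[$13]=NR} END{for (i in rows) print rows[i]}' file | sort -n > wanted_lines
awk 'NR==FNR {keep[$1]; next} FNR in keep' wanted_lines file
The second awk reads wanted_lines first (NR==FNR), remembers those line numbers, and then prints only the matching lines while streaming through file once.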
Related
How to read a .csv file with shell command? [duplicate]
This question already has answers here: Bash: Parse CSV with quotes, commas and newlines (10 answers) Closed 2 years ago.
I have a .csv file which I need to extract values from. It is formatted like this:
First line of the file (no data)
1;Jack;Daniels;Madrid;484016;
2;Alice;Morgan;London;564127;
etc...
I would need a shell command that reads all lines of a specific column within a .csv, compares each with a string and returns a value whenever it finds a matching line. In Java I would define it something like:
boolean findMatchInCSV(String valueToFind, int colNumber, String colSeparator)
The separator between columns may indeed change, which is why I would like something quite generic if possible :) But I need it as a shell command, is that possible? Thanks
I would need a shell command that read all lines
cat 1.csv                      # read the file
of a specific column within a .csv
cat 1.csv | cut -f5 -d';'      # keep only field #5 (use ';' as separator)
compare each with a string
# keep only the rows where the value of field #5 is exactly 'foo'
cat 1.csv | cut -f5 -d';' | grep '^foo$'
return a value whenever it finds a matching line.
This last request is unclear. The code above displays the searched string (foo) once for each row where it is the value of column #5 (counting columns from 1). The columns are separated by ;. Unfortunately, it doesn't handle quoted strings: if the value in any field contains the separator (;), the CSV format allows enclosing the field value in double quotes (") to prevent the separator character from being interpreted as a separator (forcing its literal value).
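As a hedged sketch of the boolean-style check the question asks for (the column number, separator, file name and search string here are placeholders, not taken from the answer above): grep -q produces no output and only sets the exit status, and -x forces a whole-field match, so the pipeline can sit directly in an if:
if cut -d';' -f5 1.csv | grep -qx 'foo'; then
    echo "match found"
else
    echo "no match"
fi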
I assume you're looking for something like
FILE=data.csv
VALUE="$1"
COLNUM=$2
IFS="$3"
while read -r -a myArray
do
    if [[ "${myArray[$COLNUM]}" == "$VALUE" ]]; then
        exit 0
    fi
done < <(tail -n +2 "$FILE")
exit 1
grep "my_string" file | awk -F ";" '{print $5}'
or
awk -F ";" '/my_string/ {print $5}' file
For the 2nd column:
awk -F ";" '$2 ~ /my_string/ {print $5}' file
For exact matching:
awk -F ";" '$2 == "my_string" {print $5}' file
Uniq skipping middle part of the line when comparing lines
Sample file
aa\bb\cc\dd\ee\ff\gg\hh\ii\jj
aa\bb\cc\dd\ee\ll\gg\hh\ii\jj
aa\bb\cc\dd\ee\ff\gg\hh\ii\jj
I want to skip the 6th field 'ff' when comparing for a unique line, and I also want the count of duplicate lines in front. I tried this, without any luck:
sort -t'\' -k1,5 -k7 --unique xslin1 > xslout
Expected output
3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj
$ awk -F'\' -v OFS='\' '{$6="*"} 1' xslin1 | sort | uniq -c
  3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj
Discussion
With --unique, sort outputs only unique lines but it does not count them. One needs uniq -c for that. Further, sort outputs all unique lines, not just those that sort to the same value. The above solution takes the simple approach of assigning the sixth field to *, as you wanted in the output, and then uses the standard pipeline, sort | uniq -c, to produce the count of unique lines.
You can do this in one awk:
awk 'BEGIN{FS=OFS="\\"} {$6="*"} uniq[$0]++{} END {for (i in uniq) print uniq[i] "\t" i}' file
3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj
Bash- sum values from an array in one line
I have this array:
array=(1 2 3 4 4 3 4 3)
I can get the largest number with:
echo "num: $(printf "%d\n" ${array[@]} | sort -nr | head -n 1)"
#outputs 4
But I want to sum up all the 4's, meaning I want it to output 12 (there are 3 occurrences of 4) instead. Any ideas?
dc <<<"$(printf '%d\n' "${array[@]}" | sort -n | uniq -c | tail -n 1) * p"
sort to get the max value at the end
uniq -c to get only unique values, with a count of how many times they appear
tail to get only the last line (with the max value and its count)
dc to multiply the value by the count
I picked dc for the multiplication step because it's RPN, so you don't have to split up the uniq -c output and insert anything in the middle of it - just add stuff to the end.
Using awk:
$ printf "%d\n" "${array[@]}" | sort -nr | awk 'NR>1 && p!=$0{print x;exit;}{x+=$0;p=$0;}'
12
Using sort, the numbers are sorted (-n) in reverse (-r) order, and the awk keeps summing the numbers till it finds a number which is different from the previous one.
You can do this with awk:
awk -v RS=" " '{sum[$0]+=$0; if($0>max) max=$0} END{print sum[max]}' <<<"${array[@]}"
Setting RS (record separator) to space allows you to read your array entries as separate records.
sum[$0]+=$0 means sum is a map of cumulative sums for each input value; if($0>max) max=$0 tracks the max number seen so far; END{print sum[max]} prints the sum for the largest number seen at the end.
<<<"${array[@]}" is a here-string that feeds a string (in this case all elements of the array) as stdin into awk.
This way there is no piping or looping involved - a single command does all the work.
Using only bash:
echo $((${array// /+}))
Replace all spaces with plus, and evaluate using a double-parentheses arithmetic expression.
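For comparison, a pure-bash sketch that answers the original question directly (this is an illustration, not one of the answers above): it walks the array once, tracking the current maximum and the running sum of its occurrences, and assumes the values are non-negative:
max=0 sum=0
for n in "${array[@]}"; do
    if (( n > max )); then
        max=$n; sum=$n      # new maximum: restart the sum
    elif (( n == max )); then
        (( sum += n ))      # another copy of the maximum: add it
    fi
done
echo "$sum"                 # prints 12 for array=(1 2 3 4 4 3 4 3)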
How to cut first n and last n columns?
How can I cut off the first n and the last n columns from a tab delimited file? I tried this to cut the first n columns, but I have no idea how to combine the first and last n columns:
cut -f 1-10 -d "<CTR>v <TAB>" filename
Cut can take several ranges in -f:
Columns up to 4 and from 7 onwards:
cut -f -4,7-
or for fields 1, 2, 5, 6 and from 10 onwards:
cut -f 1,2,5,6,10-
etc.
The first part of your question is easy. As already pointed out, cut accepts omission of either the starting or the ending index of a column range, interpreting this as meaning either "from the start to column n (inclusive)" or "from column n (inclusive) to the end," respectively:
$ printf 'this:is:a:test' | cut -d: -f-2
this:is
$ printf 'this:is:a:test' | cut -d: -f3-
a:test
It also supports combining ranges. If you want, e.g., the first 3 and the last 2 columns in a row of 7 columns:
$ printf 'foo:bar:baz:qux:quz:quux:quuz' | cut -d: -f-3,6-
foo:bar:baz:quux:quuz
However, the second part of your question can be a bit trickier depending on what kind of input you're expecting. If by "last n columns" you mean "last n columns (regardless of their indices in the overall row)" (i.e. because you don't necessarily know how many columns you're going to find in advance) then sadly this is not possible to accomplish using cut alone. In order to effectively use cut to pull out "the last n columns" in each line, the total number of columns present in each line must be known beforehand, and each line must be consistent in the number of columns it contains.
If you do not know how many "columns" may be present in each line (e.g. because you're working with input that is not strictly tabular), then you'll have to use something like awk instead. E.g., to use awk to pull out the last 2 "columns" (awk calls them fields, the number of which can vary per line) from each line of input:
$ printf '/a\n/a/b\n/a/b/c\n/a/b/c/d\n' | awk -F/ '{print $(NF-1) FS $(NF)}'
/a
a/b
b/c
c/d
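If the number of trailing fields you need varies, a hedged generalization of that awk might look like the following (n is an assumed parameter you would set with -v; with n=2 it reproduces the hard-coded command above, and lines with fewer than n fields are printed whole):
awk -F/ -v n=2 '{start = NF - n + 1; if (start < 1) start = 1; out = $start; for (i = start + 1; i <= NF; i++) out = out FS $i; print out}' file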
You can cut using the following: -d sets the delimiter, -f selects the fields, and $'\t' is used for tab-separated fields:
cut -d$'\t' -f 1-3,7-
To use AWK to cut off the first and last fields:
awk '{$1 = ""; $NF = ""; print}' inputfile
Unfortunately, that leaves the field separators, so
aaa bbb ccc
becomes
[space]bbb[space]
To do this using kurumi's answer, which won't leave extra spaces, but in a way that's specific to your requirements:
awk '{delim = ""; for (i=2;i<=NF-1;i++) {printf delim "%s", $i; delim = OFS}; printf "\n"}' inputfile
This also fixes a couple of problems in that answer.
To generalize that:
awk -v skipstart=1 -v skipend=1 '{delim = ""; for (i=skipstart+1;i<=NF-skipend;i++) {printf delim "%s", $i; delim = OFS}; printf "\n"}' inputfile
Then you can change the number of fields to skip at the beginning or end by changing the variable assignments at the beginning of the command.
You can use Bash for that:
while read -a cols; do echo ${cols[@]:0:1} ${cols[@]:1,-1}; done < file.txt
You can use awk, for example, to cut off the 1st, 2nd and last 3 columns:
awk '{for(i=3;i<=NF-3;i++) printf "%s ", $i; print ""}' file
If you have a programming language such as Ruby (1.9+):
$ ruby -F"\t" -ane 'print $F[2..-3].join("\t")' file
Try the following:
echo a#b#c | awk -F"#" '{$1 = ""; $NF = ""; print}' OFS=""
Use
cut -b COLUMN_N_BEGINS-COLUMN_N_UNTIL INPUT.TXT > OUTPUT.TXT
-f doesn't work if you have "tabs" in the text file.
How to count number of unique values of a field in a tab-delimited text file?
I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, since the first column has colors, I want to know how many different unique values there are in that column, and I want to be able to do that for each column. I need to do this on the Linux command line, so probably using some bash script, sed, awk or something. What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I want is a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green etc. coloured objects there are.
You can make use of the cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
gets unique values in field 1; replacing 1 by 2 will give you unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT: To count the number of unique occurrences you can make use of the wc command in the chain:
cut -f 1 input_file | sort | uniq | wc -l
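For the updated part of the question (a count of each distinct value rather than the number of distinct values), a small variation on the same pipeline swaps wc -l for uniq -c:
cut -f 1 input_file | sort | uniq -c
Each output line is then the count followed by the value, e.g. 3 Red if Red appears three times in column 1.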
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this, for example to list all the unique values in the first column:
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l.
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is the integer column number
# INPUT_FILE is the input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides the synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed, tr, wc, cut, sort, uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
cols=$(sed -n 1p $FILE | tr -cd '\t' | wc -c)
cols=$((cols + 2))
i=0
for ((i=1; i < $cols; i++))
do
    echo Column $i ::
    cut -f $i < "$FILE" | sort | uniq -c
    echo
done
This script outputs the count of each unique value in every column of a given file. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter to this script.
Code
#!/bin/bash
awk '
(NR==1){
    for(fi=1; fi<=NF; fi++)
        fname[fi]=$fi;
}
(NR!=1){
    for(fi=1; fi<=NF; fi++)
        arr[fname[fi]][$fi]++;
}
END{
    for(fi=1; fi<=NF; fi++){
        out=fname[fi];
        for (item in arr[fname[fi]])
            out=out"\t"item"_"arr[fname[fi]][item];
        print(out);
    }
}
' $1
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef   A_15    C_42    G_24    T_18
isCar   YEA_10  NO_40   NA_50
isTv    FALSE_33    TRUE_66