Can't make pipe operator function properly - linux

I'm trying to get the second column of a file, take the first 10 results and sort them in alphanumerical order, but it doesn't seem to work.
cut -f2 file.txt | head -10 | sort -d
I get this output:
NM_000242
NM_000525
NM_001005850
NM_001136557
NM_001204426
NM_001204836
NM_001271762
NM_001287216
NM_006952
NM_007253
If I sort the file first and then take the first 10 lines of the sorted output, it works:
cut -f2 refGene.txt | sort -d | head -10
I get this output:
NM_000014
NM_000015
NM_000016
NM_000017
NM_000018
NM_000019
NM_000020
NM_000021
NM_000022
NM_000023
I don't want to sort the whole file and take from the sorted result; I'd like to get the first 10 lines first and then sort those in alphanumerical order. What did I miss here?
Thanks

Well, it works correctly: NM_000525 sorts before NM_001005850, and the latter before NM_006952.
But if you need to sort the second part (after the _) numerically, then you can do:
cut -f2 file.txt | head -10 | sort -t_ -k1,1 | sort -s -t_ -k2 -n
-s makes the sort stable
This assumes the format is the same throughout the file (two letters, an underscore, then digits)
EDIT: Even shorter version would be:
cut -f2 file.txt | head -10 | sort -t_ -k1,1 -k2n
Explanation:
-t_ use _ as separator of fields (for selection on which field to sort)
-k1,1 sort alphabetically on the first field only (without the ,1 the key would run to the end of the line, so it would also sort on the second field)
-k2n sort numerically on the second field
So it first sorts on the first field (alphanumerically) and then breaks ties using the second field (numerically, i.e. the string after the _ is converted to a number and compared)
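As a quick check of that two-key behaviour, the command can be run on a few accession numbers inline (the sample values below are illustrative):

```shell
# Field 1 ("NM") sorts alphabetically; field 2 sorts numerically,
# so 242 < 7253 < 1005850 regardless of string length.
printf 'NM_007253\nNM_000242\nNM_001005850\n' | sort -t_ -k1,1 -k2n
# NM_000242
# NM_007253
# NM_001005850
```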

Related

Linux sort numerically based on first column

I'm trying to numerically sort a long csv file based on the number in the first column, using the command below:
-> head -1 file.csv ; tail -n +2 file.csv | sort -t , -k1n
(I'm piping the head/tail commands to skip the first line of the file, as it's a header and contains strings)
However, it doesn't return a fully sorted list. Half of it is sorted, the other half is like this:
9838,2361,8,947,2284
9842,2135,2,261,2511
9846,2710,1,176,2171
986,2689,32,123,2177
9888,2183,15,30,2790
989,2470,33,887,2345
Can somebody tell me what I'm doing wrong? I've also tried the following, with the same result:
-> sort -k1n -t"," file.csv
tail -n +2 file.csv | sort -k1,2 -n -t"," should do the trick.
To perform a numeric sort by the first column, use the following approach:
tail -n +2 file.csv | sort -n -t, -k1,1
The output:
986,2689,32,123,2177
989,2470,33,887,2345
9838,2361,8,947,2284
9842,2135,2,261,2511
9846,2710,1,176,2171
9888,2183,15,30,2790
-k pos1[,pos2]
Specify a sort field that consists of the part of the line between pos1 and pos2
(or the end of the line, if pos2 is omitted), inclusive.
In its simplest form pos specifies a field number (starting with 1) ...
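That pos1,pos2 distinction can be checked directly: with -k1,1 the key stops at the first comma, so the comparison is purely numeric on the first column (the rows below are a made-up sample):

```shell
# Restrict the numeric sort key to the first comma-separated field only.
printf '9838,2361\n986,2689\n9888,2183\n' | sort -t, -n -k1,1
# 986,2689
# 9838,2361
# 9888,2183
```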

Identify duplicate lines in a file that have 2 fields using linux command line

I have a file composed of 2 fields containing a long list of entries, where the first field is the id and the second field is a counter.
What I want is to display the duplicated ids.
Example of the file:
tXXXXXXXXXX 12345
tXXXXXXXXXX 53321
tXXXXXXXXXXXX 422642
I know the logic of how to solve this problem, that I need to iterate or loop over the file, but I do not know how to write the syntax of the command.
I will appreciate any help.
You can use this Perl one-liner, which prints (with its line number) every line whose first-field id has already been seen:
perl -ane 'print $., " ", $_ if $seen{$F[0]}++' FILENAME
If you mean you just want a list of duplicate IDs in the file, then this can be easily achieved with cut, sort and uniq.
cat <filename> | cut -f1 -d ' ' | sort | uniq -d
If you want to print all the lines with duplicate IDs on, the below can be used:
FILE=/tmp/sdhjkhsfds ; for dupe in $(cut -f1 -d ' ' "$FILE" | sort | uniq -d); do grep "^$dupe " "$FILE"; done
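Another option is a two-pass awk sketch (assuming whitespace-separated id and counter, and a hypothetical file name ids.txt): the first pass counts each id, the second prints only the lines whose id repeats.

```shell
# Sample data: two lines share the id tAAA.
printf 'tAAA 12345\ntAAA 53321\ntBBB 422642\n' > ids.txt

# Pass 1 (NR==FNR) counts ids; pass 2 prints lines with a repeated id.
awk 'NR==FNR {count[$1]++; next} count[$1] > 1' ids.txt ids.txt
# tAAA 12345
# tAAA 53321
```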

Uniq skipping middle part of the line when comparing lines

Sample file
aa\bb\cc\dd\ee\ff\gg\hh\ii\jj
aa\bb\cc\dd\ee\ll\gg\hh\ii\jj
aa\bb\cc\dd\ee\ff\gg\hh\ii\jj
I want to skip the 6th field ('ff') when comparing for a unique line, and I also want the count of duplicate lines in front.
I tried this, without any luck:
sort -t'\' -k1,5 -k7 --unique xslin1 > xslout
Expected output
3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj
$ awk -F'\' -v OFS='\' '{$6="*"} 1' xslin1 | sort | uniq -c
3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj
Discussion
With --unique, sort outputs only the lines that are unique under the given keys, but it does not count them; one needs uniq -c for that. It also prints the kept line as-is, so the sixth field would still read ff rather than *.
The above solution takes the simple approach of assigning * to the sixth field, as you wanted in the output, and then uses the standard sort | uniq -c pipeline to produce the count of identical lines.
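Recreating the three sample lines inline shows the whole pipeline at work (the leading count format comes from uniq -c):

```shell
# Mask field 6 with *, then count identical masked lines.
printf 'aa\\bb\\cc\\dd\\ee\\ff\\gg\\hh\\ii\\jj\naa\\bb\\cc\\dd\\ee\\ll\\gg\\hh\\ii\\jj\naa\\bb\\cc\\dd\\ee\\ff\\gg\\hh\\ii\\jj\n' |
  awk -F'\\' -v OFS='\\' '{$6="*"} 1' | sort | uniq -c
#   3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj
```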
You can do this in one awk:
awk 'BEGIN{FS=OFS="\\"} {$6="*"; uniq[$0]++}
     END {for (i in uniq) print uniq[i] "\t" i}' file
3 aa\bb\cc\dd\ee\*\gg\hh\ii\jj

BASH order strings by the last "field" (after the "/" symbol)

I'm looking for a method to sort lines alphabetically by their last "field".
So:
if my output is (maybe from a grep command):
mike/downloads.png
mike/public/system.png
mike/root/alphabet.png
the result should be:
root/alphabet.png
downloads.png
public/system.png
because "alphabet", "downloads" and "system" are ordered alphabetically.
Should I first cut and sort them with cut -f2 -d"/" | sort, and then merge back the rest of the path?
Or is there an easier way?
Any help will be appreciated.
Thanks
(example modified)
Sort has a -t parameter to specify the field delimiter, and -k to specify the field to sort on so you can write:
sort -t/ -k 3
Thank you all! I have finally found what I was looking for:
first
awk -F'/' '{print $NF,$0}'
then
sort
and finally
sed -n 's/[^/]*\///p'
and the output will be
folder/file.png
file.png
folder/folder2/file.png
As the number of fields is dynamic you could append the last field to the start of the line before sorting and remove it after:
$ awk -F'/' '{print $NF,$0}' file | sort | awk '{print $2}'
mike/root/alphabet.png
mike/downloads.png
mike/public/system.png
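One caveat: the final awk '{print $2}' drops everything after a space, so filenames containing spaces would be truncated. A sketch that survives spaces decorates with a tab instead (assuming paths themselves contain no tabs):

```shell
# Decorate each line with its basename + a tab, sort, then cut the prefix off.
printf 'mike/downloads.png\nmike/public/system.png\nmike/root/alphabet.png\n' |
  awk -F'/' '{print $NF "\t" $0}' | sort | cut -f2-
# mike/root/alphabet.png
# mike/downloads.png
# mike/public/system.png
```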
This specifies the third field, using / as the field delimiter (note this assumes every path has the same depth):
sort -t'/' -k 3

How to count number of unique values of a field in a tab-delimited text file?

I have a text file with a large amount of data which is tab-delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, it's like the first column has colors; I want to know how many different unique values there are in that column, and I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I want is a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green etc. coloured objects there are.
You can make use of cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
gets unique values in field 1, replacing 1 by 2 will give you unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique occurrences you can add the wc command to the chain:
cut -f 1 input_file | sort | uniq | wc -l
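For the updated question, counting each value rather than just the number of distinct values, replacing wc -l with uniq -c gives a per-value tally (tiny tab-separated sample inline):

```shell
# Count occurrences of each distinct value in column 1.
printf 'Red\tBall\nBlue\tBat\nRed\tCar\n' | cut -f1 | sort | uniq -c
#   1 Blue
#   2 Red
```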
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this, for example to list all the unique values in the first column
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
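Filled in with hypothetical example values (COLUMN, a file named colors.tsv), the two-variable one-liner above runs like this:

```shell
# Hypothetical values for the two variables used above.
COLUMN=1
INPUT_FILE=colors.tsv
printf 'Red\tBall\nBlue\tBat\nRed\tCar\n' > "${INPUT_FILE}"

# Two distinct values in column 1 (Red, Blue), so this prints 2.
cut -f "${COLUMN}" < "${INPUT_FILE}" | sort -u | wc -l
```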
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides the synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed tr wc cut sort uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)   # number of tabs in the header line
cols=$((cols + 2))
i=0
for ((i=1; i < $cols; i++))
do
echo Column $i ::
cut -f $i < "$FILE" | sort | uniq -c
echo
done
This script outputs, for each column of a given file, every unique value together with its count. It assumes that the first line of the given file is a header line, and there is no need to define the number of fields. Note that it relies on arrays of arrays, a GNU awk (gawk 4.0+) feature. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter to this script.
Code
#!/bin/bash
awk -F'\t' '
NR == 1 {
    # remember the header name of each column
    for (fi = 1; fi <= NF; fi++)
        fname[fi] = $fi
}
NR != 1 {
    # count every value per column (gawk arrays of arrays)
    for (fi = 1; fi <= NF; fi++)
        arr[fname[fi]][$fi]++
}
END {
    for (fi = 1; fi <= NF; fi++) {
        out = fname[fi]
        for (item in arr[fname[fi]])
            out = out "\t" item "_" arr[fname[fi]][item]
        print out
    }
}
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66
