sort lines with nonuniform fields - linux

I have a text file that contains names like the following:
cup-1.5.1-1_d03.rpm
cup-1.5-2_d03.rpm
cup-1.5.1-2_d03.rpm
cup-1.5-3_d03.rpm
I know that cup-1.5.1-2_d03.rpm should be the latest version. Is there a way to sort it properly so that the output would look like this:
cup-1.5-2_d03.rpm
cup-1.5-3_d03.rpm
cup-1.5.1-1_d03.rpm
cup-1.5.1-2_d03.rpm
I tried
$ grep -o -P '(?<=cup-)[0-9]+.[0-9]+.*[0-9]+(?=.*.rpm)' a.txt | tr '.' ' ' | tr '-' ' ' | sort -k4
1 5 2_d03
1 5 3_d03
1 5 1 1_d03
1 5 1 2_d03
but I am not sure how I can achieve it for the entire string. I have yet another file with slightly different versioning, where the field would be -k5.
$ cat b.txt
dup-1.1.1-3_d03.rpm
dup-1.1.1.1-1_d03.rpm
dup-1.1.1-1_d03.rpm
Here, I would expect an output:
dup-1.1.1-1_d03.rpm
dup-1.1.1-3_d03.rpm
dup-1.1.1.1-1_d03.rpm
Is there a way to sort these two files programmatically without relying on the rpm utility, if at all possible?

Use sort -V (version sort):
sort -V file
cup-1.5-2_d03.rpm
cup-1.5-3_d03.rpm
cup-1.5.1-1_d03.rpm
cup-1.5.1-2_d03.rpm
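The same option should also handle the second file; with GNU sort, the version comparison places the extra .1 component after the shorter version strings:
sort -V b.txt
dup-1.1.1-1_d03.rpm
dup-1.1.1-3_d03.rpm
dup-1.1.1.1-1_d03.rpm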

Related

Filtering on a condition using the column names and not numbers

I am trying to filter a text file with columns based on two conditions. Due to the size of the file, I cannot use the column numbers (there are thousands of columns and they are unnumbered) and need to use the column names instead. I have searched and tried multiple ways to do this, but nothing is returned to the command line.
Here are a few things I have tried:
awk '($colname1==2 && $colname2==1) { count++ } END { print count }' file.txt
to filter out the columns based on their conditions
and
head -1 file.txt | tr '\t' | cat -n | grep "COLNAME
to try and return the possible column number related to the column.
An example file would be:
ID ad bd
1 a fire
2 b air
3 c water
4 c water
5 d water
6 c earth
Output would be:
2 (count of ad=c and bd=water)
With your input file and the implied conditions, this should work:
$ awk -v c1='ad' -v c2='bd' 'NR==1{n=split($0,h); for(i=1;i<=n;i++) col[h[i]]=i}
$col[c1]=="c" && $col[c2]=="water"{count++} END{print count+0}' file
2
Or you can replace c1 and c2 with the literal values in the script as well.
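If you would rather keep the comparison values out of the script body too, they can be passed the same way; v1 and v2 below are just illustrative variable names:
$ awk -v c1='ad' -v v1='c' -v c2='bd' -v v2='water' 'NR==1{n=split($0,h); for(i=1;i<=n;i++) col[h[i]]=i}
$col[c1]==v1 && $col[c2]==v2{count++} END{print count+0}' file
which should again print 2 for the sample input.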
To find the column indices you can run:
$ awk -v cols='ad bd' 'BEGIN{n=split(cols,c); for(i=1;i<=n;i++) colmap[c[i]]}
NR==1{for(i=1;i<=NF;i++) if($i in colmap) print $i,i; exit}' file
ad 2
bd 3
Or perhaps with this chain:
$ sed 1q file | tr -s ' ' \\n | nl | grep -E 'ad|bd'
2 ad
3 bd
although this may give false positives due to the regex match...
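One way to reduce that risk is to restrict the grep to whole-word matches, for example:
$ sed 1q file | tr -s ' ' \\n | nl | grep -wE 'ad|bd'
which should print the same two lines while ignoring headers that merely contain ad or bd as a substring.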
You can rewrite the awk to be more succinct
$ awk -v cols='ad bd' '{while(++i<=NF) if(FS cols FS ~ FS $i FS) print $i,i;
exit}' file
ad 2
bd 3
As I mentioned in an earlier comment, the answer at https://unix.stackexchange.com/a/359699/133219 shows how to do this:
awk -F'\t' '
NR==1 {
    for (i=1; i<=NF; i++) {
        f[$i] = i
    }
}
($(f["ad"]) == "c") && ($(f["bd"]) == "water") { cnt++ }
END { print cnt+0 }
' file
2
I'm assuming your input is tab-separated because of the tr '\t' in the command in your question, which looks like an attempt to convert tabs to newlines in order to map column names to numbers. If I'm wrong and the fields are just separated by runs of white space, then remove -F'\t' from the above.
Use the Miller toolkit (mlr) to manipulate tab-delimited files using column names. Below is a one-liner that filters a tab-delimited file (the delimiter is specified with --tsv) and writes the results to STDOUT together with the header. The header is removed using tail and the lines are counted with wc.
mlr --tsv filter '$ad == "c" && $bd == "water"' file.txt | tail -n +2 | wc -l
Prints:
2
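If your Miller version provides the count verb (recent releases do), you can also let mlr do the counting itself by chaining verbs with then, which drops the tail/wc step:
mlr --tsv filter '$ad == "c" && $bd == "water"' then count file.txt
This prints a single count field (2 for the sample data) instead of the matching rows.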
SEE ALSO:
miller manual
Note that miller can be easily installed, for example, using conda, like so:
conda create --name miller miller
For years it bugged me that there is no succinct way in Unix to do this sort of thing, although Miller is a pretty good tool for it. Recently I wrote pick to choose columns by name, and additionally to modify, combine and add them by name, as well as to filter rows by clauses using column names. The solution to the above with pick is:
pick -h #ad=c #bd=water < data.txt | wc -l
By default pick prints the header of the selected columns; -h omits it. To print columns you simply name them on the command line, e.g.
pick ad water < data.txt | wc -l
Pick has many modes, all of them focused on manipulating columns and selecting/filtering rows with a minimal amount of syntax.

Finding the number of specific files via bash

Fill in the dots in the following Unix command so that the standard output gives an overview per file type of the number of files in the /dev directory.
In this overview, all file types must be listed in descending order of the number of files found of that type. If there are file types with an equal number of files, they must be listed in alphabetical order.
$ find /dev -ls | …
7 c
6 l
3 d
Tips:
The part already given, with the find command, also finds hidden files in the directory.
With the help of the cut command you can select a certain part of a line; the two most important options are -f and -d. The first selects the fields (columns); by default, the tab character is used as the delimiter. With the option -d you can specify a different delimiter.
tr, sort and uniq might be useful.
What I have so far:
find /dev -ls | tr \\t " " | tr -s " " | cut -f3 -d ' ' | cut -c-1 | sort | uniq -c | sort -r
But this doesn't seem to work...
Thanks in advance.
I like to use awk for cases like this instead of tr:
find /dev -ls | gawk '{ c=substr($3,1,1) ; x[c]++ } END { for(y in x) print x[y] " " y }' | sort -k1,1nr -k2,2
The trailing sort orders the counts in descending order and breaks ties alphabetically, as the expected output requires.
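If you prefer to stay closer to the tools the tips suggest (tr, cut, sort, uniq), a pipeline along these lines should also work; the leading sed, which strips the spaces find -ls uses to right-align the inode column, is my own addition, and the exact field position may vary slightly between find implementations:
# field 3 of find -ls is the permission string; its first character is the file type
find /dev -ls | sed 's/^ *//' | tr -s ' ' | cut -d ' ' -f 3 | cut -c 1 | sort | uniq -c | sort -k1,1nr -k2,2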

Identify duplicate lines in a file that have 2 fields using linux command line

I have a file composed of 2 fields that contains a long list of entries, where the first field is the id and the second field is a counter.
What I want is to display the duplicated ids.
example of the file:
tXXXXXXXXXX 12345
tXXXXXXXXXX 53321
tXXXXXXXXXXXX 422642
I know the logic of how to solve this problem, that I need to iterate or loop over the file, but I do not know how to write the syntax of the command.
I will appreciate any help.
You can use this; the -a switch autosplits each line into @F, so duplicates are detected on the first field (the id):
perl -ane '++$i; print $i, " ", $_ if $line{$F[0]}++' FILENAME
If you mean you just want a list of duplicate IDs in the file, then this can be easily achieved with cut, sort and uniq.
cat <filename> | cut -f1 -d ' ' | sort | uniq -d
If you want to print all the lines with duplicate IDs, the below can be used:
FILE=/tmp/sdhjkhsfds ; for dupe in $(cat $FILE | cut -f1 -d ' ' | sort | uniq -d); do cat $FILE | grep $dupe | awk '{print $1, $2}'; done
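If awk is an option, a two-pass one-liner is a tidier sketch of the same idea: the first pass counts the ids, the second pass prints every line whose id occurs more than once.
awk 'NR==FNR { cnt[$1]++; next } cnt[$1] > 1' FILENAME FILENAME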

what is the meaning of the delimiter in cut, and why does this command sort twice?

I am trying to understand what this command does, and since I only know the basics, here is what I have found so far:
last | cut -d" " -f 1 | sort | uniq -c | sort
last = Last searches back through the file /var/log/wtmp (or the file designated by the -f flag) and displays a list of all users logged in (and out) since that file was created.
cut is to show the desired column.
The option -d specifies what is the field delimiter that is used in the input file.
-f specifies which field you want to extract
1 is the field number being extracted, I think, but I am not sure.
Then the output is sorted, and then:
The uniq command is helpful to remove or detect duplicate entries in a file. This tutorial explains a few of the most frequently used uniq command line options that you might find helpful.
If anyone can explain this command, and also explain why there are two sorts, I will appreciate it.
You are right in your explanation of cut: cut -d" " -f1 (no need for a space after -f) gets the first field of a stream based on the delimiter " " (a space).
Then why sort | uniq -c | sort?
From man uniq:
Note: 'uniq' does not detect repeated lines unless they are adjacent.
You may want to sort the input first, or use 'sort -u' without 'uniq'.
Also, comparisons honor the rules specified by 'LC_COLLATE'.
That's why you need to sort the lines before piping to uniq. Finally, as uniq output is not sorted, you need to sort again to see the most repeated items first.
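Note that the final sort in your command is a plain lexical sort; if the aim is to see the most frequent users first, you would typically make it numeric and reversed, e.g.:
last | cut -d" " -f1 | sort | uniq -c | sort -nr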
See an example of sort and uniq -c for a given file with repeated items:
$ seq 5 >>a
$ seq 5 >>a
$ cat a
1
2
3
4
5
1
2
3
4
5
$ sort a | uniq -c | sort <--- no repeated matches
2 1
2 2
2 3
2 4
2 5
$ uniq -c a | sort <---- repeated matches
1 1
1 1
1 2
1 2
1 3
1 3
1 4
1 4
1 5
1 5
Note you can do the sort | uniq -c all together with this awk:
last | awk '{a[$1]++} END{for (i in a) print i, a[i]}'
This will store the values of the first column in the a[] array and increase the counter whenever it finds the same value again. In the END{} block it prints the results, unsorted, so you could pipe to sort again.
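For example, appending | sort -k2,2nr sorts by the count column; or, if you are on GNU awk, you can sort inside the END block itself using the gawk-specific PROCINFO["sorted_in"] setting:
last | gawk '{a[$1]++} END{PROCINFO["sorted_in"]="@val_num_desc"; for (i in a) print i, a[i]}'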
uniq -c is being used to create a frequency histogram. The reason for the second sort is that you are then sorting your histogram by frequency order.
The reason for the first sort is that uniq is only comparing each line to its previous when deciding whether the line is unique or not.

How to count number of unique values of a field in a tab-delimited text file?

I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So it's like the first column has colors, and I want to know how many different unique values there are in that column; I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I wanted was a count of "each" of these unique values, not just to know how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green, etc. coloured objects there are.
You can make use of cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
gets the unique values in field 1; replacing 1 with 2 will give you the unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique occurrences you can make use of the wc command in the chain:
cut -f 1 input_file | sort | uniq | wc -l
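For the update in the question (a count of each unique value rather than just the number of unique values), adding -c to uniq gives the per-value counts:
cut -f 1 input_file | sort | uniq -c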
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
You can use awk, sort & uniq to do this, for example to list all the unique values in the first column
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something you can pipe the unique list into wc -l
Assuming the data file is actually Tab separated, not space aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides the synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed tr wc cut sort uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)
cols=$((cols + 2))
for ((i=1; i < $cols; i++))
do
    echo "Column $i ::"
    cut -f $i < "$FILE" | sort | uniq -c
    echo
done
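A quick usage sketch, assuming the script above is saved as colsummary.sh (a name picked here just for illustration) and the question's sample rows are in a tab-separated data.tsv:
chmod +x colsummary.sh
./colsummary.sh data.tsv
Each column is then printed in turn as a uniq -c histogram under its Column N :: header.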
This script outputs the unique values in each column of a given file, together with their counts. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Note that it relies on GNU awk (gawk 4.0 or later) for its true multidimensional arrays. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter to the script.
Code
#!/bin/bash
awk '
    (NR==1){
        for(fi=1; fi<=NF; fi++)
            fname[fi]=$fi;
    }
    (NR!=1){
        for(fi=1; fi<=NF; fi++)
            arr[fname[fi]][$fi]++;
    }
    END{
        for(fi=1; fi<=NF; fi++){
            out=fname[fi];
            for (item in arr[fname[fi]])
                out=out"\t"item"_"arr[fname[fi]][item];
            print(out);
        }
    }
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66
