Finding the number of specific files via bash - linux

Fill in the dots in the following Unix command so that standard output gives an overview per file type with the number of files in the /dev directory.
In this overview, all file types must be listed in descending order of the number of files found of that type. If there are file types with an equal number of files, they must be listed in alphabetical order.
$ find /dev -ls | …
7 c
6 l
3 d
Tips:
The part of the find command already given also finds hidden files in the directory.
With the help of the cut command, you can select a certain part of a line; the two most important options are -f and -d. The first one selects a field after the line is split into columns; by default the tab character is used as the delimiter. With the option -d you can specify a custom delimiter.
tr, sort and uniq might be useful.
What I have so far:
find /dev -ls | tr \\t " " | tr -s " " | cut -f3 -d ' ' | cut -c-1 | sort | uniq -c | sort -r
But this doesn't seem to work...
Thanks in advance.

I like to use awk for these cases instead of tr:
find /dev -ls | gawk '{ c=substr($3,1,1) ; x[c]++ } END { for(y in x) print x[y] " " y }' | sort -n
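To get exactly the ordering asked for (descending by number of files, alphabetical for equal counts), the final sort can be given explicit keys; a small variation of the same pipeline:
find /dev -ls | gawk '{ c=substr($3,1,1) ; x[c]++ } END { for(y in x) print x[y] " " y }' | sort -k1,1nr -k2,2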

Related

sort lines with nonuniform fields

I have a text file that contains names like the following:
cup-1.5.1-1_d03.rpm
cup-1.5-2_d03.rpm
cup-1.5.1-2_d03.rpm
cup-1.5-3_d03.rpm
I know that cup-1.5.1-2_d03.rpm should be the latest version. Is there a way to sort it properly so that the output would look like,
cup-1.5-2_d03.rpm
cup-1.5-3_d03.rpm
cup-1.5.1-1_d03.rpm
cup-1.5.1-2_d03.rpm
I tried
$ grep -o -P '(?<=cup-)[0-9]+.[0-9]+.*[0-9]+(?=.*.rpm)' a.txt | tr '.' ' ' | tr '-' ' ' | sort -k4
1 5 2_d03
1 5 3_d03
1 5 1 1_d03
1 5 1 2_d03
but I am not sure how I can achieve it for the entire string. I have yet another file that uses slightly different versioning, where the field would be -k5.
$ cat b.txt
dup-1.1.1-3_d03.rpm
dup-1.1.1.1-1_d03.rpm
dup-1.1.1-1_d03.rpm
Here, I would expect an output:
dup-1.1.1-1_d03.rpm
dup-1.1.1-3_d03.rpm
dup-1.1.1.1-1_d03.rpm
Is there a way to sort these two files programmatically without relying on the rpm utility, if at all possible?
Use sort -V (version):
sort -V file
cup-1.5-2_d03.rpm
cup-1.5-3_d03.rpm
cup-1.5.1-1_d03.rpm
cup-1.5.1-2_d03.rpm
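The same command also handles the second file from the question; with GNU sort, sort -V b.txt gives the expected order:
dup-1.1.1-1_d03.rpm
dup-1.1.1-3_d03.rpm
dup-1.1.1.1-1_d03.rpm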

grep for a substring

I have a file that has the following user names in random places in the file:
albert#ghhdh
albert#jdfjgjjg
john#jfkfeie
mike#fjfkjf
bill#fjfj
bill#fkfkfk
Usernames are the names to the left of the # symbol.
I want to use unix commands to grep the file for usernames, then make a count of unique usernames.
Therefore, using the example above, the output should state that there are 4 unique users (I just need the count as the output, no words).
Can someone help me determine the correct count?
You could extract the words before #, sort them and count them:
cat test.txt | cut -d '#' -f 1 | sort | uniq -c
With test.txt :
albert#ghhdh
john#jfkfeie
bill#fjfj
mike#fjfkjf
bill#fkfkfk
albert#jdfjgjjg
It outputs:
2 albert
2 bill
1 john
1 mike
Note that the duplicate usernames don't have to be grouped in the input list.
If you're just interested in the count of unique users:
cat test.txt | cut -d '#' -f 1 | sort -u | wc -l
# => 4
Or shorter:
cut -d '#' -f 1 test.txt | sort -u | wc -l
Here is a solution that finds the usernames anywhere on the line (not just at the beginning), even if there are multiple usernames on a single line, and counts the unique names:
grep -oE '\b[[:alpha:]_][[:alnum:]_.]*#' file | cut -f1 -d# | sort -u | wc -l
-o only fetches the matched portion
-E processes extended regex
\b[[:alpha:]_][[:alnum:]_.]*# matches usernames (a string following a word boundary \b that starts with a letter or underscore, followed by zero or more alphanumeric, underscore or dot characters, and ending with a #)
cut -f1 -d# extracts the username portion which is then sorted and counted for unique names
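For example, with a hypothetical messages.txt in which usernames appear anywhere on the lines, possibly several per line, the pipeline still counts each distinct user once:
$ cat messages.txt
note from albert#ghhdh sent to john#jfkfeie
mike#fjfkjf pinged bill#fjfj
bill#fkfkfk replied
$ grep -oE '\b[[:alpha:]_][[:alnum:]_.]*#' messages.txt | cut -f1 -d# | sort -u | wc -l
4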
Faster with one awk command, if awk is allowed:
awk -F"#" '!seen[$1]++{c++}END{print "Unique users =" c}'
A small explanation:
Using # as the delimiter (-F), the username is field 1, i.e. $1 for awk.
For every $1 that has not been seen before, we increase a counter c.
At the same time we increment seen[$1], so if the same username is found again the "not seen" test no longer holds.
At the end we just print the counter of unique usernames.
As a plus, this solution does not require pre-sorting: duplicates are found even if the file is not sorted.
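For the test.txt shown earlier (four distinct usernames), this prints:
Unique users = 4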

Count lines and group by prefix word

I want to count the number of lines in a document and group them by the prefix word. The prefix is a set of alphanumeric characters delimited by the first underscore. I don't care much about sorting them, but it would be nice to list them in descending order by number of occurrences.
The file looks like this:
prefix1_data1
prefix1_data2_a
differentPrefix_data3
prefix1_data2_b
differentPrefix_data5
prefix2_data4
differentPrefix_data5
The output should be the following:
prefix1 3
differentPrefix 3
prefix2 1
I already did this in Python, but I am curious whether it is possible to do this more efficiently on the command line or in a bash script. The uniq command has -c and -w options, but the length of the prefix may vary.
A solution using a combination of the sed, sort and uniq commands:
sed -rn 's/^([^_]+)_.*/\1/p' testfile | sort | uniq -c
The output:
3 differentPrefix
3 prefix1
1 prefix2
^([^_]+)_ - matches a substring (the prefix, containing any characters except _) from the start of the line up to the first occurrence of the underscore _
You could use awk:
awk -F_ '{a[$1]++}END{for(i in a) print i,a[i]}' file
The field separator is set to _.
An array a is filled with each first field and its associated count.
Once the file has been parsed, the array contents are printed.
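If you also want the list in descending order of occurrences, as mentioned in the question, the awk output can be piped through sort keyed on the count column; a small sketch:
awk -F_ '{a[$1]++}END{for(i in a) print i,a[i]}' file | sort -k2,2nr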
I like RomanPerekhrest's answer. It's more concise. Here is a small change to make it even more concise by using cut in place of sed.
cut -d_ -f1 testfile | sort | uniq -c
This can also be done in the following manner, where testfile is the file with the contents mentioned above.
printf "%-20s%d\n" prefix1 "$(grep -c '^prefix1' testfile)"
printf "%-20s%d\n" differentPrefix "$(grep -c '^differentPrefix' testfile)"
printf "%-20s%d\n" prefix2 "$(grep -c '^prefix2' testfile)"
So you can check this against your code and see which one is more efficient.

Identify duplicate lines in a file that have 2 fields using linux command line

I have a file composed of 2 fields that contains a long list of entries, where the first field is the id and the second field is a counter.
What I want is to display the duplicated ids.
Example of the file:
tXXXXXXXXXX 12345
tXXXXXXXXXX 53321
tXXXXXXXXXXXX 422642
I know the logic of how to solve this problem (I need to iterate or loop over the file), but I do not know how to write the syntax of the command.
I will appreciate any help.
You can use this, keying the duplicate check on the first field (the id):
perl -ane '++$i; print $i, " ", $_ if $seen{$F[0]}++' FILENAME
If you mean you just want a list of duplicate IDs in the file, then this can be easily achieved with cut, sort and uniq.
cat <filename> | cut -f1 -d ' ' | sort | uniq -d
If you want to print all the lines with duplicate IDs on, the below can be used:
FILE=/tmp/sdhjkhsfds ; for dupe in $(cat $FILE | cut -f1 -d ' ' | sort | uniq -d); do cat $FILE | grep $dupe | awk '{print $1, $2}'; done
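If awk is an option, all lines whose id appears more than once can also be printed in one command by reading the file twice (the first pass counts the ids, the second prints the repeated ones); a minimal sketch using the same $FILE:
awk 'NR==FNR { count[$1]++; next } count[$1] > 1' "$FILE" "$FILE"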

How to count number of unique values of a field in a tab-delimited text file?

I have a text file with a large amount of data which is tab delimited. I want to have a look at the data such that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So it's like the first column has colors, and I want to know how many different unique values there are in that column, and I want to be able to do that for each column.
I need to do this in a Linux command line, so probably using some bash script, sed, awk or something.
What if I wanted a count of these unique values as well?
Update: I guess I didn't put the second part clearly enough. What I wanted to do is to have a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green, etc. coloured objects there are.
You can make use of cut, sort and uniq commands as follows:
cat input_file | cut -f 1 | sort | uniq
This gets the unique values in field 1; replacing 1 with 2 will give you the unique values in field 2.
Avoiding UUOC :)
cut -f 1 input_file | sort | uniq
EDIT:
To count the number of unique occurrences you can make use of the wc command in the chain:
cut -f 1 input_file | sort | uniq | wc -l
awk -F '\t' '{ a[$1]++ } END { for (n in a) print n, a[n] } ' test.csv
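For the two sample rows above, saved tab-separated as test.csv, this prints each colour from the first column with its count (the order of the output lines is not guaranteed):
Blue 1
Red 1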
You can use awk, sort & uniq to do this, for example to list all the unique values in the first column:
awk < test.txt '{print $1}' | sort | uniq
As posted elsewhere, if you want to count the number of instances of something, you can pipe the unique list into wc -l.
Assuming the data file is actually tab-separated, not space-aligned:
<test.tsv awk '{print $4}' | sort | uniq
Where $4 will be:
$1 - Red
$2 - Ball
$3 - 1
$4 - Sold
# COLUMN is integer column number
# INPUT_FILE is input file name
cut -f ${COLUMN} < ${INPUT_FILE} | sort -u | wc -l
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides a synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed, tr, wc, cut, sort, uniq.
#!/bin/bash
# Syntax: $0 filename
# The input is assumed to be a .tsv file
FILE="$1"
# count the tabs in the header line; number of columns = tabs + 1
cols=$(sed -n 1p "$FILE" | tr -cd '\t' | wc -c)
# +2 so that the loop below runs from 1 through the number of columns
cols=$((cols + 2))
for ((i = 1; i < cols; i++))
do
echo Column $i ::
cut -f $i < "$FILE" | sort | uniq -c
echo
done
This script outputs the counts of the unique values in each column of a given file. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Simply save the script in a bash file (.sh) and provide the tab-delimited file as a parameter to this script.
Code
#!/bin/bash
# Note: the arr[x][y] multidimensional arrays below need GNU awk (gawk) 4 or later.
awk -F'\t' '
(NR==1){
    # remember the column names from the header line
    for(fi=1; fi<=NF; fi++)
        fname[fi]=$fi;
}
(NR!=1){
    # count every value per column
    for(fi=1; fi<=NF; fi++)
        arr[fname[fi]][$fi]++;
}
END{
    for(fi=1; fi<=NF; fi++){
        out=fname[fi];
        for (item in arr[fname[fi]])
            out=out"\t"item"_"arr[fname[fi]][item];
        print(out);
    }
}
' "$1"
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
isRef A_15 C_42 G_24 T_18
isCar YEA_10 NO_40 NA_50
isTv FALSE_33 TRUE_66
