Using awk and sorting [duplicate] - linux

This question already has answers here:
How to sort a file, based on its numerical values for a field?
(8 answers)
Closed 4 years ago.
I have a file that contains names and numbers like so:
students.txt:
Student A F 40 50 60
Student B F 50 60 70
Student C M 60 70 80
Student D M 100 90 90
Student E F 80 90 100
Student F M 20 30 40
Student G M 30 40 50
I want to sort these names, using awk, by the last number on each line.
When I try
sort -k6 students.txt | awk '{print}'
The output I get is
... 100
... 40
... 50
... 60
... 70
... 80
... 90
The output is mostly sorted, except for the first line. Why does 100 appear at the start of the output rather than at the end?

You need to use numeric sort, via the -n flag. From the sort(1) man page:
-n, --numeric-sort
compare according to string numerical value
Result:
$ sort -n -k6 students.txt
Student F M 20 30 40
Student G M 30 40 50
Student A F 40 50 60
Student B F 50 60 70
Student C M 60 70 80
Student D M 100 90 90
Student E F 80 90 100
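By default sort compares fields as strings, character by character, so "100" sorts before "40" because the character '1' comes before '4'. A quick way to see the difference:
$ printf '100\n40\n90\n' | sort
100
40
90
$ printf '100\n40\n90\n' | sort -n
40
90
100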


Separating lines with multiple values in one cell to individual lines in excel [duplicate]

This question already has answers here:
Unnest (explode) a Pandas Series
(8 answers)
Closed 2 years ago.
I have a data set (CSV file) that lists names along with the number of people who have each name and the name's "rank".
I am looking for a way to separate all the names onto individual lines, ideally in Excel, but maybe something in pandas is an option.
The problem is that many of the lines contain multiple comma-separated names.
The data looks like this:
rank | number of occurrences | name
1 | 10000 | marie
2 | 9999 | sophie
3 | 9998 | ellen
...
...
50 | 122 | jude, allan, jaspar
I would like to have each name on an individual line alongside its corresponding number of occurrences. It's fine that the rank is duplicated.
Something like this:
rank | number of occurrences | name
1 | 10000 | marie
2 | 9999 | sophie
3 | 9998 | ellen
..
...
50 | 122 | jude
50 | 122 | allan
50 | 122 | jaspar
Use df.explode()
df.assign(name=(df.name.str.split(','))).explode('name')
How it works:
df.name.str.split(',')   # puts the names in a list
df.assign(name=...)      # equivalent to df.name = ..., but returns a new DataFrame
df.explode('name')       # splits the multiple names into one per row
   rank  number of occurrences    name
0     1                  10000   marie
1     2                   9999  sophie
2     3                   9998   ellen
3    50                    122    jude
3    50                    122   allan
3    50                    122  jaspar
In [60]: df
Out[60]:
   rank   no                 name
0    50  122  jude, allan, jaspar
In [61]: df.assign(name=df['name'].str.split(',')).explode('name')
Out[61]:
   rank   no    name
0    50  122    jude
0    50  122   allan
0    50  122  jaspar

Datamash: Transposing the column into rows based on group in bash

I have a tab-delimited file with two columns, like the following:
A 123
A 23
A 45
A 67
B 88
B 72
B 50
B 23
C 12
C 14
I want to transpose the above data based on the first column, like the following:
A 123 23 45 67
B 88 72 50 23
C 12 14
I tried datamash transpose < input-file.txt, but it didn't yield the expected output.
One awk version:
awk '{printf ($1!=f?"\n%s":" "$2),$0;f=$1}' file
A 123 23 45 67
B 88 72 50 23
C 12 14
With this version you get one leading blank line, but it should be fast and handle large data, since no loops or array variables are used.
$1!=f?"\n%s":" "$2: if the first field is not equal to f, print a newline and all fields; if $1 == f, only print a space and field 2.
f=$1: set f to the first field for the next comparison.
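If the leading blank line is a problem, here is a lightly adapted sketch of the same idea that prints the newline only before subsequent groups and adds the final newline at the end:
awk '{printf (NR==1?"%s":($1!=f?"\n%s":" "$2)),$0;f=$1} END{print ""}' file
A 123 23 45 67
B 88 72 50 23
C 12 14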
datamash --group=1 --field-separator=' ' collapse 2 <file | tr ',' ' '
Output:
A 123 23 45 67
B 88 72 50 23
C 12 14
Input must be sorted, as in the question.
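If your input is not already sorted, sort it on the first field first, or let datamash do the sorting with its -s flag. A minimal sketch of the pre-sorted pipeline:
sort -k1,1 file | datamash --group=1 --field-separator=' ' collapse 2 | tr ',' ' '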
This might work for you (GNU sed):
sed -E ':a;N;s/^((\S+)\s+.*)\n\2/\1/;ta;P;D' file
Append the next line and if the first field of the first line is the same as the first field of the second line, remove the newline and the first field of the second line. Print the first line in the pattern space and then delete it and the following newline and repeat.

Print the count of files in a specific format iteratively in shell script

I have the following folder structure:
A/B/C/D/E/00
A/B/C/D/E/01
.
.
A/B/C/D/E/23
Similarly,
M/N/O/P/Q/00
M/N/O/P/Q/01
.
.
M/N/O/P/Q/23
Now, each folder from 00 to 23 has many files inside, which I would like to count.
If I run this simple command:
ls /A/B/C/D/E/00 | wc -l
I can get the count of files in each of these subdirectories. I want to automate this and run it over all of them. Can anyone suggest a way?
Also, the final output I am looking at is a file that should look like this:
C E RESULT OF ls /A/B/C/D/E/00 | wc -l RESULT OF ls /A/B/C/D/E/01 | wc -l
M Q RESULT OF ls /M/N/O/P/Q/00 | wc -l RESULT OF ls /M/N/O/P/Q/01 | wc -l
So, the output should look like this finally
C E 23 23 4 6 7 4 76 98 57 2 67 9 12 34 67 0 2 3 78 98 12 3 57 213
M Q 12 10 2 34 32 1 35 65 87 8 32 2 65 87 98 0 4 12 1 35 34 76 9 67
Note that the values after the letters are the file counts of the 24 folders 00 through 23.
Using the eval approach, I can hardcode the paths and get the exact results. But I wanted it to show me the data for the previous day, so this is what I did:
d=$(date --date="1 day ago" +%Y%m%d)
month=$(date +%Y%m)
eval echo YZ $d '"$(ls "/A/B/YZ/$month/$d/"'{20150800..20150823})'| wc -l)"'
This works perfectly, because in the given location there are files inside the child directories 20150800 through 20150823. However, when I try to generalize it like below, it gives me the total count of the folder instead of the count of each subfolder:
eval echo YZ $d '"$(ls "/A/B/YZ/$month/$d/"'{"$d"00.."$d"23})'| wc -l)"'
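The generalized version fails because bash performs brace expansion before parameter expansion, so a sequence like {"$d"00.."$d"23} is never expanded as a numeric range; this is also why the hardcoded {20150800..20150823} works and the variable version does not. A quick illustration:
$ n=3; echo {1..$n}    # brace expansion runs first, so this stays literal
{1..3}
$ eval echo {1..$n}    # eval forces a second round of expansion
1 2 3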
Something like this (not tested):
for d in [A-Z]/[A-Z]/[A-Z]/[A-Z]/[A-Z]/[0-9][0-9]
do
    [[ -d $d ]] && echo "$d : $(ls "$d" | wc -l)"
done
Note that this gives an incorrect count if one of the file names contains a newline character.
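To produce the exact output format from the question (one line per directory group followed by the 24 counts), something along these lines could work. This is a minimal sketch: the parent paths are assumptions taken from the question, and find is used instead of ls so that unusual file names do not skew the counts:
for parent in /A/B/C/D/E /M/N/O/P/Q; do
    # label the line with the 3rd and 5th path components (C E, M Q)
    IFS=/ read -r _ p1 p2 p3 p4 p5 <<< "$parent"
    printf '%s %s' "$p3" "$p5"
    for h in $(seq -w 0 23); do
        # count regular files in each 00..23 subdirectory
        printf ' %s' "$(find "$parent/$h" -maxdepth 1 -type f | wc -l)"
    done
    printf '\n'
done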

How to generate 3 natural numbers that sum to 60 using awk

I am trying to write an awk script that generates 3 natural numbers that sum to 60. I am trying the rand function, but I have a problem getting the numbers to sum to 60.
Here is one way:
awk -v n=60 'BEGIN{srand();a=int(rand()*n);b=int(rand()*(n-a));c=n-a-b;
print a,b,c}'
The idea is:
generate a random number a: 0 <= a < 60
generate a random number b: 0 <= b < 60-a
c = 60-a-b
Here I set a variable n=60, to make it easy to adapt if you need a different sum.
If we run this one-liner 10 times, we get output:
kent$ awk 'BEGIN{srand();for(i=1;i<=10;i++){a=int(rand()*60);b=int(rand()*(60-a));c=60-a-b;print a,b,c}}'
46 7 7
56 1 3
26 15 19
14 12 34
44 6 10
1 36 23
32 1 27
41 0 19
55 1 4
54 1 5
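As a quick sanity check, pipe the same one-liner through a second awk to confirm every triple sums to 60 (a minimal sketch; each printed sum should be 60):
kent$ awk 'BEGIN{srand();for(i=1;i<=10;i++){a=int(rand()*60);b=int(rand()*(60-a));print a,b,60-a-b}}' | awk '{print $0" -> "$1+$2+$3}'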

Rearrange columns with empty values using awk or sed

I want to rearrange the columns of a txt file, but there are empty values, which cause a problem. For example:
testfile:
Name    ID      Count   Date    Other
A       1       10      513     x
        6       15      312     x
        3       18      314     x
B       19      31      942     x
        8       29      722     x
When I tried $ more testfile | awk '{print $2"\t"$1"\t"$3"\t"$4"\t"$5}'
it becomes:
ID      Name    Count   Date    Other
1       A       10      513     x
15      6       312     x
18      3       314     x
19      B       31      942     x
29      8       722     x
which is not what I want. Please help; I want it to be:
ID      Name    Count   Date    Other
1       A       10      513     x
6               15      312     x
3               18      314     x
19      B       31      942     x
8               29      722     x
Moreover, I am not sure which columns might contain empty values, and the column widths are not fixed. Thank you.
Assuming your input file is not tab-separated and you have (or can get) GNU awk then I recommend:
$ awk -v FIELDWIDTHS="8 8 8 8 8" -v OFS='\t' '{
    for (i=1; i<=NF; i++) {
        gsub(/^\s+|\s+$/, "", $i)   # trim leading/trailing whitespace from each fixed-width field
    }
    t = $1; $1 = $2; $2 = t         # swap the first two fields
}1' file
ID      Name    Count   Date    Other
1       A       10      513     x
6               15      312     x
3               18      314     x
19      B       31      942     x
8               29      722     x
If your file is tab-separated then all you need is:
awk 'BEGIN{FS=OFS="\t"} {t=$1; $1=$2; $2=t}1' file
Another awk alternative uses the number of fields. If you know your data and only the first column can be empty, you can try this:
awk -v OFS="\t" 'NF==4{$5=$4;$4=$3;$3=$2;$2=$1;$1=""} {print $2,$1,$3,$4,$5}'
However, the output will be tab-separated instead of fixed-width. You can achieve the same fixed-width layout using printf instead of changing OFS, but perhaps tab-separated is what you really want for a tabular representation.
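For example, here is a minimal sketch of that printf variant; the 8-character column width is an assumption borrowed from the FIELDWIDTHS answer above:
awk 'NF==4{$5=$4;$4=$3;$3=$2;$2=$1;$1=""} {printf "%-8s%-8s%-8s%-8s%-8s\n",$2,$1,$3,$4,$5}' file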
The most natural model for awk to use is columns as defined by the transitions from white-space to non-white-space and back. Since you have columns that may themselves be white-space, the natural model won't work.
However, you can revert to using a model based on column positions rather than transitions, meaning that a file containing only spaces (the presence of tabs will complicate things):
Name    ID      Count   Date    Other
A       1       10      513     x
        6       15      312     x
        3       18      314     x
B       19      31      942     x
        8       29      722     x
can still be rearranged, though not as succinctly as transition-based columns.
The following awk script will do the trick, swapping name and id:
{
name = substr($0, 1,7);
id = substr($0, 9,7);
count = substr($0,17,7);
date = substr($0,25,7);
other = substr($0,33 );
print id" "name" "count" "date" "other;
}
If the original file is called pax.in and the awk script is stored in pax.awk, the command awk -f pax.awk pax.in will give you, as desired:
ID      Name    Count   Date    Other
1       A       10      513     x
6               15      312     x
3               18      314     x
19      B       31      942     x
8               29      722     x
Keep in mind I've written that script to be relatively flexible, allowing you to change the order of the columns quite easily. If all you want is to swap the first two columns, you could use:
awk '{print substr($0,9,8)substr($0,1,8)substr($0,17)}' qq.in
or the slightly shorter (if you're allowed to use other tools):
sed -E 's/^(.{8})(.{8})/\2\1/' qq.in
