Gawk print largest value from each column - linux

I am writing an awk script that takes some columns of input in a text file and prints out the largest value in each column.
Input:
$cat numbers
10 20 30.3 40.5
20 30 45.7 66.1
40 75 107.2 55.6
50 20 30.3 40.5
60 30 45.O 66.1
70 1134.7 50 70
80 75 107.2 55.6
Output:
80 1134.7 107.2 70
Script:
BEGIN {
    val=0;
    line=1;
}
{
    if( $2 > $3 )
    {
        if( $2 > val )
        {
            val=$2;
            line=$0;
        }
    }
    else
    {
        if( $3 > val )
        {
            val=$3;
            line=$0;
        }
    }
}
END{
    print line
}
Current output:
60 30 45.O 66.1
What am I doing wrong? This is my first awk script.
======= SOLUTION =======
END {
    for (i = 0; ++i <= NF;)
        printf "%s", (m[i] (i < NF ? FS : RS))
}
{
    for (i = 0; ++i <= NF;)
        $i > m[i] && m[i] = $i
}
Thanks for the help
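A note on the terse `$i > m[i] && m[i] = $i` line in the solution: it relies on the awk accepting an assignment on the right-hand side of `&&`, which gawk does but some POSIX awks reject. A sketch of the same idiom with explicit parentheses, run on toy data:

```shell
# Same per-column-max idiom as the solution, parenthesized for portability.
printf '1 5\n9 2\n' |
awk '{ for (i = 1; i <= NF; i++) ($i > m[i]) && (m[i] = $i) }
     END { print m[1], m[2] }'
# prints: 9 5
```

The parenthesized form makes the precedence explicit: test first, assign only when the test is true.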

Since you have four columns, you'll need at least four variables, one for each column (or an array if you prefer). And you won't need to hold any line in its entirety. Treat each column independently.
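A minimal sketch of that array approach, keeping one running maximum per column (toy data inlined here; the real input is the numbers file above, and non-negative numeric data is assumed):

```shell
# One running max per column, kept in array m; $i+0 forces numeric comparison.
printf '10 20 30.3 40.5\n70 1134.7 50 70\n80 75 107.2 55.6\n' |
awk '{ for (i = 1; i <= NF; i++) if ($i + 0 > m[i]) m[i] = $i }
     END { for (i = 1; i <= NF; i++) printf "%s%s", m[i], (i < NF ? OFS : ORS) }'
# prints: 80 1134.7 107.2 70
```

Note that in the END block NF still holds the field count of the last record read, which is what the final loop relies on.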

You need to adapt something like the following for your purposes; it will find the maximum in a particular column (the second, in this case).
awk 'BEGIN {max = 0} {if ($2>max) max=$2} END {print max}' numbers.dat
The approach you are taking with $2 > $3 seems to be comparing two columns with each other.

You can create one user defined function and then pass individual column arrays to it to retrieve the max value. Something like this -
[jaypal:~/Temp] cat numbers
10 20 30.3 40.5
20 30 45.7 66.1
40 75 107.2 55.6
50 20 30.3 40.5
60 30 45.O 66.1
70 1134.7 50.0 70
80 75 107.2 55.6
[jaypal:~/Temp] awk '
function max(x){i=0;for(val in x){if(i<=x[val]){i=x[val];}}return i;}
{a[$1]=$1;b[$2]=$2;c[$3]=$3;d[$4]=$4;next}
END{col1=max(a);col2=max(b);col3=max(c);col4=max(d);print col1,col2,col3,col4}' numbers
80 1134.7 107.2 70
or
awk 'a<$1{a=$1}b<$2{b=$2}c<$3{c=$3}d<$4{d=$4} END{print a,b,c,d}' numbers

Related

How to print contents of column fields that have strings composed of "n" character/s using bash?

Say I have a file which contains:
22 30 31 3a 31 32 3a 32 " 0 9 : 1 2 : 2
30 32 30 20 32 32 3a 31 1 2 7 2 2 : 1
And I want to print only the column fields whose strings are composed of one character. I want the output to be like this:
" 0 9 : 1 2 : 2
1 2 7 2 2 : 1
Then I want to print only those strings that are composed of two characters; the output should be:
22 30 31 3a 31 32 3a 32
30 32 30 20 32 32 3a 31
I am a beginner and I really don't know how to do this. Thanks for your help!
Could you please try the following; I am trying a different way for the provided samples. It is written and tested with the provided samples only.
To get the values before the BULK SPACE, try:
awk '
{
    line=$0
    while(match($0,/[[:space:]]+/)){
        arr=arr>RLENGTH?arr:RLENGTH
        start[arr]+=RSTART+prev_start
        prev_start=RSTART
        $0=substr($0,RSTART+RLENGTH)
    }
    var=substr(line,1,start[arr]-1)
    sub(/ +$/,"",var)
    print var
    delete start
    var=arr=""
}
' Input_file
Output will be as follows.
22 30 31 3a 31 32 3a 32
30 32 30 20 32 32 3a 31
To get the values after the BULK SPACE, try:
awk '
{
    line=$0
    while(match($0,/[[:space:]]+/)){
        arr=arr>RLENGTH?arr:RLENGTH
        start[arr]+=RSTART+prev_start
        prev_start=RSTART
        $0=substr($0,RSTART+RLENGTH)
    }
    var=substr(line,start[arr])
    sub(/^ +/,"",var)
    print var
    delete start
    var=arr=""
}
' Input_file
Output will be as follows:
" 0 9 : 1 2 : 2
1 2 7 2 2 : 1
You can try
awk '{for(i=1;i<=NF;++i)if(length($i)==1)printf("%s ", $i);print("")}'
For each field, check the length and print the field if it has the desired length. You may pass the -F option to awk if the input is not separated by blanks.
The awk script is expanded as:
for( i = 1; i <= NF; ++i )
    if( length( $i ) == 1 )
        printf( "%s ", $i );
print( "" );
The print outside the loop prints a newline after each input line.
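A possible generalisation, hedged as a sketch: pass the target length in as an awk variable (the variable name n is my choice, not part of the answer above), building the output string instead of printing field by field:

```shell
# Keep only the fields whose length equals n (here n=2), joined by spaces.
printf '22 30 31 3a 31 32 3a 32 " 0 9 : 1 2 : 2\n' |
awk -v n=2 '{
    s = ""
    for (i = 1; i <= NF; i++)          # test each field length against n
        if (length($i) == n) s = s (s == "" ? "" : " ") $i
    print s
}'
# prints: 22 30 31 3a 31 32 3a 32
```

Running the same command with -v n=1 would select the one-character fields instead.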
Assuming all the columns are tab-separated (so a column value can itself contain a space, as in the second line of your sample), this is easy to do with a perl one-liner:
$ perl -F"\t" -lane 'BEGIN { $, = "\t" } print grep { /^.$/ } @F' foo.txt
" 0 9 : 1 2 : 2
1 2 7 2 2 : 1
$ perl -F"\t" -lane 'BEGIN { $, = "\t" } print grep { /^..$/ } @F' foo.txt
22 30 31 3a 31 32 3a 32
30 32 30 20 32 32 3a 31

Use printf to format list that is uneven

I have a small list of student grades; I need to format it side by side depending on the gender of the student, so one column is Male and the other Female. The problem is that the list doesn't alternate male, female, male, female; it is uneven.
I've tried using printf to format the output so the two columns are side by side, but the format is ruined because the list is uneven.
Name Gender Mark1 Mark2 Mark3
AA M 20 15 35
BB F 22 17 44
CC F 19 14 25
DD M 15 20 42
EE F 18 22 30
FF M 0 20 45
This is the list I am talking about ^^
awk 'BEGIN {print "Male" " Female"} {if (NR!=1) {if ($2 == "M") {printf "%-s %-s %-s", $3, $4, $5} else if ($2 == "F") {printf "%s %s %s\n", $3, $4 ,$5}}}' text.txt
So I'm getting results like
Male Female
20 15 35 22 17 44
19 14 25
15 20 42 18 22 30
0 20 45
But I want it like this:
Male Female
20 15 35 22 17 44
15 20 42 19 14 25
0 20 45 18 22 30
I haven't added separators yet; I'm just trying to figure this out. I'm not sure if it would be better to put the marks into two arrays depending on gender and then print them out.
Another solution, which tries to address the case where the M and F rows are not evenly paired:
$ awk 'NR==1 {print "Male\tFemale"}
NR>1 {k=$2;$1=$2="";sub(/ +/,"");
if(k=="M") m[++mc]=$0; else f[++fc]=$0}
END {max=mc>fc?mc:fc;
for(i=1;i<=max;i++) print (m[i]?m[i]:"-") "\t" (f[i]?f[i]:"-")}' file |
column -ts$'\t'
Male Female
20 15 35 22 17 44
15 20 42 19 14 25
0 20 45 18 22 30
Something like this?
awk 'BEGIN{format="%2s %2s %2s %2s\n";printf("Male Female\n"); }NR>1{if (s) { if ($2=="F") {printf(format, s, $3, $4, $5);} else {printf(format, $3,$4,$5,s);} s=""} else {s=sprintf("%2s %2s %2s", $3, $4, $5)}}' file
Another approach using awk
awk '
BEGIN {
    print "Male\t\tFemale"
}
NR > 1 {
    I = ++G[$2]
    A[$2 FS I] = sprintf("%2d %2d %2d", $(NF-2), $(NF-1), $NF)
}
END {
    M = ( G["M"] > G["F"] ? G["M"] : G["F"] )
    for ( i = 1; i <= M; i++ )
        print A["M" FS i] ? A["M" FS i] : OFS, A["F" FS i] ? A["F" FS i] : OFS
}
' OFS='\t' file
This might work for you (GNU sed):
sed -Ee '1c\Male Female' -e 'N;s/^.. M (.*)\n.. F(.*)/\1\2/;s/^.. F(.*)\n.. M (.*)/\2\1/' file
Change the header line. Then compare a pair of lines and re-arrange them as appropriate.

How to sort or rearrange numbers from multiple column into multiple row [fixed into 4 columns]?

I have one text file, test1.txt, which contains the following:
Input:
##[A1] [B1] [T1] [V1] [T2] [V2] [T3] [V3] [T4] [V4]## --> headers
1 1000 0 100 10 200 20 300 30 400
40 500 50 600 60 700 70 800
1010 0 101 10 201 20 301 30 401
40 501 50 601
2 1000 0 110 15 210 25 310 35 410
45 510 55 610 65 710
1010 0 150 10 250 20 350 30 450
40 550
Condition:
A1 and B1 -> for each A1 + (B1 + [Tn + Vn])
A1 should be in 1 column.
B1 should be in 1 column.
T1,T2,T3 and T4 should be in 1 column.
V1,V2,V3 and V4 should be in 1 column.
How do I rearrange it to become like the output below?
Desired Output:
## A1 B1 Tn Vn ## --> headers
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
Here is my current code:
First Attempt:
Input:
cat test1.txt | awk '
{
    a=$1
    b=$2
}
{
    for(i=1; i<=5; i=i+1) {
        t=substr($0,11+i*10,5)
        v=substr($0,16+i*10,5)
        if( t ~ /^\ +[0-9]+$/ || t ~ /^[0-9]+$/ || t ~ /^\ +[0-9]+\ +$/ ){
            printf "%7s %7d %8d %8d \n",a,b,t,v
        }
    }
}' | less
Output:
1 1000 400 0
40 500 800 0
1010 0 401 0
2 1000 410 0
1010 0 450 0
I'm trying to use a simple awk command, but I still can't get the result.
Can anyone help me with this?
Thanks,
Am
Unlike what is stated elsewhere, there's nothing tricky about this at all; you're just using fixed-width fields in your input instead of character/string-separated fields.
With GNU awk for FIELDWIDTHS to handle fixed width fields it really couldn't be much simpler:
$ cat tst.awk
BEGIN {
    # define the width of the input and output fields
    FIELDWIDTHS = "2 4 5 5 6 5 6 5 6 5 6 99"
    ofmt = "%2s%5s%6s%5s%6s%s\n"
}
{
    # strip leading/trailing blanks and square brackets from every field
    for (i=1; i<=NF; i++) {
        gsub(/^[[\s]+|[]\s]+$/,"",$i)
    }
}
NR==1 {
    # print the header line
    printf ofmt, $1, $2, $3, "Tn", "Vn", " "$NF
    next
}
{
    # print every other line
    for (i=4; i<NF; i+=2) {
        printf ofmt, $1, $2, $3, $i, $(i+1), ""
        $1 = $2 = $3 = ""
    }
}
$ awk -f tst.awk file
## A1 B1 Tn Vn ## --> headers
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
With other awks you'd use a while() { substr() } loop instead of FIELDWIDTHS so it'd be a couple more lines of code but still trivial.
The above will be orders of magnitude faster than an equivalent shell script. See https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice.
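A minimal sketch of that substr() alternative for awks without FIELDWIDTHS; the input and the hard-coded widths here are made up for illustration, not taken from the question's data:

```shell
# Carve fixed-width fields out of each record manually with substr().
echo 'AB1234CD' |
awk '{
    n = split("2 4 2", w, " ")     # field widths (assumed for this demo)
    pos = 1
    for (i = 1; i <= n; i++) {     # extract each field at its offset
        printf "%s%s", substr($0, pos, w[i]), (i < n ? "|" : "\n")
        pos += w[i]
    }
}'
# prints: AB|1234|CD
```

The same loop structure drops into the FIELDWIDTHS script above in place of awk's automatic field splitting.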
This isn't easy, because it is hard to identify when you have the different styles of row — those with values in both column 1 and column 2, those with no value in column 1 and a value in column 2, and those with no value in either column. A first step is to make this easier — sed to the rescue:
$ sed 's/[[:space:]]\{1,\}$//
s/^....../&|/
s/|....../&|/
:a
s/|\( *[0-9][0-9]* \)\( *[^|]\)/|\1|\2/
t a' data
1 | 1000 | 0 | 100 | 10 | 200 | 20 | 300 | 30 | 400
| | 40 | 500 | 50 | 600 | 60 | 700 | 70 | 800
| 1010 | 0 | 101 | 10 | 201 | 20 | 301 | 30 | 401
| | 40 | 501 | 50 | 601
2 | 1000 | 0 | 110 | 15 | 210 | 25 | 310 | 35 | 410
| | 45 | 510 | 55 | 610 | 65 | 710
| 1010 | 0 | 150 | 10 | 250 | 20 | 350 | 30 | 450
| | 40 | 550
$
The first line removes any trailing white space, to avoid confusion. The next two expressions handle the fixed-width columns 1 and 2 (6 characters each). The next line creates a label a; the substitute finds a pipe |, some spaces, some digits, a space, and some trailing material which doesn't include a pipe; and inserts a pipe in the middle. The t a jumps back to the label if a substitution was done.
With that in place, it becomes easy to manage awk with a field separator of |.
This is verbose, but seems to do the trick:
awk -F '|' '
$1 > 0 { printf "%5d %4d %3d %3d\n", $1, $2, $3, $4
for (i = 5; i <= NF; i += 2) { printf "%5s %4s %3d %3d\n", "", "", $i, $(i+1) }
next
}
$2 > 0 { printf "%5s %4d %3d %3d\n", "", $2, $3, $4
for (i = 5; i <= NF; i += 2) { printf "%5s %4s %3d %3d\n", "", "", $i, $(i+1) }
next
}
{ for (i = 3; i <= NF; i += 2) { printf "%5s %4s %3d %3d\n", "", "", $i, $(i+1) }
next
}'
Output:
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
If you need to remove the headings, add 1d; to the start of the sed script.
This might work for you (GNU sed):
sed -r '1d;s/^(.{11}).{11}/&\n\1/;s/^((.{5}).*\n)\2/\1 /;s/^(.{5}(.{6}).*\n.{5})\2/\1 /;/\S/P;D' file
Delete the first line (if the header is needed see below). The key fields occupy the first 11 (the first key is 5 characters and the second 6) characters and the data fields occupy the next 11. Insert a newline and the key fields before each pair of data fields. Compare the keys on adjacent lines and replace by spaces if they are duplicated. Do not print any blank lines.
If the header is needed, use the following:
sed -r '1{s/\[[^]]+\]\s*//5g;y/[]/ /;s/1/n/3g;s/B/ B/;G;b};s/^(.{11}).{11}/&\n\1/;s/^((.{5}).*\n)\2/\1 /;s/^(.{5}(.{6}).*\n.{5})\2/\1 /;/\S/P;D' file
This does additional formatting on the first line to remove superfluous headings, []'s, replace 1's by n, add an additional space for alignment and a following empty line.
Furthermore, by utilising the second line of the input file as a template for the data, a sed script can be created that does not have any hard-coded values:
sed -r '2!d;s/\s*\S*//3g;s/.\>/&\n/;h;s/[^\n]/./g;G;s/[^\n.]/ /g;s#(.*)\n(.*)\n(.*)\n(.*)#1d;s/^(\1\2)\1\2/\&\\n\\1/;s/^((\1).*\\n)\\2/\\1\3/;s/^(\1(\2).*\\n\1)\\2/\\1\4/;/\\S/P;D#' file |
sed -r -f - file
The script created from the template is piped into a second invocation of the sed as a file and run against the original file to produce the required output.
Likewise the headers may be formatted if need be as so:
sed -r '2!d;s/\s*\S*//3g;s/.\>/&\n/;h;s/[^\n]/./g;G;s/[^\n.]/ /g;s#(.*)\n(.*)\n(.*)\n(.*)#s/^(\1\2)\1\2/\&\\n\\1/;s/^((\1).*\\n)\\2/\\1\3/;s/^(\1(\2).*\\n\1)\\2/\\1\4/;/\\S/P;D#' file |
sed -r -e '1{s/\[[^]]+\]\s*//5g;y/[]/ /;s/1/n/3g;s/B/ B/;G;b}' -f - file
By extracting the first four fields from the second line of the input file, four variables can be made: two regexps and two values. These variables can be used to build the sed script.
N.B. The sed script is created from strings extracted from the template; the variables produced are also strings, so they can be concatenated to produce further new regexps, new values, etc.
This is a rather tricky problem that can be handled a number of ways. Whether in bash, perl or awk, you will need to handle the number of fields in some semi-generic way instead of just hardcoding values for your example.
Using bash, so long as you can rely on an even number of fields in all lines (except for the lines with a sole initial value, e.g. 1010), you can accommodate the number of fields in a reasonably generic way. For the lines beginning with 1, 2, etc. you know your initial output will contain 4 fields. For lines beginning with 1010, etc. you know the output will contain an initial 3 fields. For the remaining values you are simply outputting pairs.
The tricky part is handling the alignment. This is where printf helps: it lets you set the field width with a parameter, using the form "%*s", where the conversion specifier takes the next argument as an integer field width, followed by the argument for the string conversion itself. It takes a little gymnastics, but you could do something like the following in bash itself:
(note: edit to match your output header format)
#!/bin/bash

declare -i nfields wd=6     ## total no. fields, printf field-width modifier

while read -r line; do      ## read each line (preserve for header line)
    arr=($line)             ## separate into array
    first=${arr[0]}         ## check for '#' in first line for header
    if [ "${first:0:1}" = '#' ]; then
        nfields=$((${#arr[@]} - 2))     ## no. fields in header
        printf "## A1 B1 Tn Vn ## --> headers\n"    ## new header
        continue
    fi
    fields=${#arr[@]}       ## fields in line
    case "$fields" in
        $nfields )          ## fields -eq nfields?
            cnt=4           ## handle 1st 4 values in line
            printf " "
            for ((i=0; i < cnt; i++)); do
                if [ "$i" -eq '2' ]; then
                    printf "%*s" "5" "${arr[i]}"
                else
                    printf "%*s" "$wd" "${arr[i]}"
                fi
            done
            echo
            for ((i = cnt; i < $fields; i += 2)); do    ## handle rest
                printf "%*s%*s%*s\n" "$((2*wd))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
            done
            ;;
        $((nfields - 1)) )  ## one less than nfields
            cnt=3           ## handle 1st 3 values
            printf " %*s%*s" "$wd" " "
            for ((i=0; i < cnt; i++)); do
                if [ "$i" -eq '1' ]; then
                    printf "%*s" "5" "${arr[i]}"
                else
                    printf "%*s" "$wd" "${arr[i]}"
                fi
            done
            echo
            for ((i = cnt; i < $fields; i += 2)); do    ## handle rest
                if [ "$i" -eq '0' ]; then
                    printf "%*s%*s%*s\n" "$((wd+1))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
                else
                    printf "%*s%*s%*s\n" "$((2*wd))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
                fi
            done
            ;;
        * )                 ## all other lines format as pairs
            for ((i = 0; i < $fields; i += 2)); do
                printf "%*s%*s%*s\n" "$((2*wd))" " " "$wd" "${arr[i]}" "$wd" "${arr[$((i+1))]}"
            done
            ;;
    esac
done
Rather than opening a file inside the script, just redirect the input file into the script (if you would rather pass a filename, redirect that file into the while read ... loop).
Example Use/Output
$ bash text1format.sh <dat/text1.txt
## A1 B1 Tn Vn ## --> headers
1 1000 0 100
10 200
20 300
30 400
40 500
50 600
60 700
70 800
1010 0 101
10 201
20 301
30 401
40 501
50 601
2 1000 0 110
15 210
25 310
35 410
45 510
55 610
65 710
1010 0 150
10 250
20 350
30 450
40 550
As between awk and bash, awk will generally be faster, but here with formatted output, it may be closer than usual. Look things over and let me know if you have questions.
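The `%*s` width-from-argument behaviour described above can be checked in isolation with the bash builtin printf (the bracket framing here is just for visibility):

```shell
# Width 6 is taken from the argument list, not hard-coded in the format.
printf '[%*s]\n' 6 abc     # right-justified: [   abc]
printf '[%-*s]\n' 6 abc    # left-justified:  [abc   ]
```

Note this relies on the bash builtin; a standalone /usr/bin/printf may not support `*`.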

Find lines with a common value in a particular column

Suppose I have a file like this
5 kata 45 buu
34 tuy 3 rre
21 ppo 90 ty
21 ret 60 buu
09 ret 89 ty
21 plk 1 uio
23 kata 90 ty
I want to output only the lines that contain repeated values in the 4th column. Therefore, my desired output would be this one:
5 kata 45 buu
21 ppo 90 ty
21 ret 60 buu
09 ret 89 ty
23 kata 90 ty
How can I perform this task?
I can identify and isolate the column of my interest with:
awk -F"," '{print $4}' file1 > file1_temp
and then check if there are repeated values and how many with:
awk '{dups[$1]++} END{for (num in dups) {print num,dups[num]}}' file1_temp
but that's definitely not what I would like to do.
A simple way to preserve the ordering would be to run through the file twice. The first time, keep a record of the counts, then print the ones with a count greater than 1 on the second pass:
awk 'NR == FNR { ++count[$4]; next } count[$4] > 1' file file
If you prefer not to loop through the file twice, you can keep track of things in a few arrays and do the printing in the END block:
awk '{ line[NR] = $0; col[NR] = $4; ++count[$4] }
END { for (i = 1; i <= NR; ++i) if (count[col[i]] > 1) print line[i] }' file
Here line stores the contents of the whole line, col stores the fourth column and count does the same as before.

Compare two files having different column numbers and print the requirement to a new file if condition satisfies

I have two files with more than 10000 rows:
File1 has 1 col    File2 has 4 col
23                 23 88 90 0
34                 43 74 58 5
43                 54 87 52 3
54                 73 52 35 4
.                  .
.                  .
I want to compare each value in file1 with the first column of file2. If the value exists there, print it along with the other three values from file2. In this example the output will be:
23 88 90 0
43 74 58 5
54 87 52 3
.
.
I have written the following script, but it takes too much time to execute.
s1=1; s2=$(wc -l < File1.txt)
while [ $s1 -le $s2 ]
do
    n=$(awk 'NR=="$s1" {print $1}' File1.txt)
    p1=1; p2=$(wc -l < File2.txt)
    while [ $p1 -le $p2 ]
    do
        awk '{if ($1==$n) printf ("%s %s %s %s\n", $1, $2, $3, $4);}'> ofile.txt
        (( p1++ ))
    done
    (( s1++ ))
done
Is there any short/ easy way to do it?
You can do it very concisely using awk:
awk 'FNR==NR{found[$1]++; next} $1 in found'
Test
>>> cat file1
23
34
43
54
>>> cat file2
23 88 90 0
43 74 58 5
54 87 52 3
73 52 35 4
>>> awk 'FNR==NR{found[$1]++; next} $1 in found' file1 file2
23 88 90 0
43 74 58 5
54 87 52 3
What it does:
FNR==NR Checks whether FNR, the per-file record number, equals NR, the overall record number. This is true only while awk reads the first file, file1, because FNR is reset to 1 each time awk starts a new file.
{found[$1]++; next} If the check is true, this creates (or increments) an entry in an associative array indexed by $1, the first column of file1, then skips to the next record.
$1 in found This check is only reached for the second file, file2. If the column 1 value, $1, is an index in the associative array found, the entire line is printed (the print action is not written because it is the default).
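A hedged variant of the same idiom that prints explicit columns instead of relying on the default print action (toy files created inline; note that a bare found[$1], without ++, is enough to create the index):

```shell
# Pass 1 remembers file1's keys; pass 2 prints matching rows column by column.
printf '23\n43\n' > file1
printf '23 88 90 0\n34 1 2 3\n43 74 58 5\n' > file2
awk 'FNR==NR { found[$1]; next }               # first file: record keys
     $1 in found { print $1, $2, $3, $4 }' file1 file2
# prints:
# 23 88 90 0
# 43 74 58 5
```

Printing named fields like this is handy if file2 ever grows extra columns you do not want in the output.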
