awk key-value issue for array - linux

I've run into an awk array issue; details below:
[~/temp]$ cat test.txt
1
2
3
4
1
2
3
I want to count the frequency of each number.
[~/temp]$ awk 'num[$1]++;END{for (i in num){printf("%s\t%-s\n", num[i],i)|"sort -r -n -k1"} }' test.txt
1
2
3
2 3
2 2
2 1
1 4
As you can see, the first 3 lines of output ('1 2 3') come with a blank count value. Why is that?
Thanks for your answers.

An awk statement consists of a pattern and a related action. An omitted pattern matches every record of input. An omitted action is an alias for {print $0}, i.e. output the current record, which is what you are getting. Looking at the first part of your program:
$ awk 'num[$1]++' file
1
2
3
Let's change that a bit to understand what happens there:
$ awk '{print "NR:",NR,"num["$1"]++:",num[$1]++}' file
NR: 1 num[1]++: 0
NR: 2 num[2]++: 0
NR: 3 num[3]++: 0
NR: 4 num[4]++: 0
NR: 5 num[1]++: 1
NR: 6 num[2]++: 1
NR: 7 num[3]++: 1
Since you are using the postfix operator num[$1]++ in the pattern, on records 1-4 it evaluates to 0 before its value is incremented, so the pattern is false and nothing is printed; on records 5-7 it evaluates to 1, the pattern is true, and the record is printed. The output would be different with the prefix operator ++num[$1], which first increments the value and then evaluates it: that would print every record of input, not just the last three you were getting.
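To see the difference, here is a minimal sketch of the prefix version used as a pattern; since ++num[$1] is always at least 1 (and therefore true), every record prints:

```shell
# same sample input as in the question
printf '1\n2\n3\n4\n1\n2\n3\n' > test.txt
awk '++num[$1]' test.txt
# prints all 7 input lines: 1 2 3 4 1 2 3
```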
The correct way is to use num[$1]++ as an action, not as a pattern:
$ awk '{num[$1]++}' file

Put your "per line" part in {}, i.e. { num[$1]++; }
awk programs are a collection of [pattern] { action } pairs (the pattern is optional, the {} is not). It seems that in your case your line is being treated as the pattern.
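Putting the pieces together, a sketch of the asker's command with num[$1]++ moved into an action block (the sort pipeline is kept from the question):

```shell
# the sample input from the question
printf '1\n2\n3\n4\n1\n2\n3\n' > test.txt
awk '{num[$1]++} END{for (i in num) printf("%s\t%-s\n", num[i], i) | "sort -r -n -k1"}' test.txt
```

which, as in the question's own trailing output, sorts to 2/3, 2/2, 2/1, 1/4 with no stray unpaired lines.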

Related

How can a Linux cpuset be iterated over with shell?

Linux's sys filesystem represents sets of CPU ids with the syntax:
0,2,8: Set of CPUs containing 0, 2 and 8.
4-6: Set of CPUs containing 4, 5 and 6.
Both syntaxes can be mixed and matched, for example: 0,2,4-6,8
For example, running cat /sys/devices/system/cpu/online prints 0-3 on my machine which means CPUs 0, 1, 2 and 3 are online.
The problem is that the above syntax is difficult to iterate over with a for loop in a shell script. How can it be converted to a more conventional form such as 0 2 4 5 6 8?
Try:
$ echo 0,2,4-6,8 | awk '/-/{for (i=$1; i<=$2; i++)printf "%s%s",i,ORS;next} 1' ORS=' ' RS=, FS=-
0 2 4 5 6 8
This can be used in a loop as follows:
for n in $(echo 0,2,4-6,8 | awk '/-/{for (i=$1; i<=$2; i++)printf "%s%s",i,ORS;next} 1' RS=, FS=-)
do
echo cpu="$n"
done
Which produces the output:
cpu=0
cpu=2
cpu=4
cpu=5
cpu=6
cpu=8
Or like:
printf "%s" 0,2,4-6,8 | awk '/-/{for (i=$1; i<=$2; i++)printf "%s%s",i,ORS;next} 1' RS=, FS=- | while read n
do
echo cpu="$n"
done
Which also produces:
cpu=0
cpu=2
cpu=4
cpu=5
cpu=6
cpu=8
How it works
The awk command works as follows:
RS=,
This tells awk to use , as the record separator.
If, for example, the input is 0,2,4-6,8, then awk will see four records: 0 and 2 and 4-6 and 8.
FS=-
This tells awk to use - as the field separator.
With FS set this way and if, for example, the input record consists of 2-4, then awk will see 2 as the first field and 4 as the second field.
/-/{for (i=$1; i<=$2; i++)printf "%s%s",i,ORS;next}
For any record that contains -, we print out each number starting with the value of the first field, $1, and ending with the value of the second field, $2. Each such number is followed by the Output Record Separator, ORS. By default, ORS is a newline character. For some of the examples above, we set ORS to a space.
After we have printed these numbers, we skip the rest of the commands and jump to the next record.
1
If we get here, then the record did not contain - and we print it out as is. 1 is awk's shorthand for print-the-line.
A Perl one:
echo "0,2,4-6,8" | perl -lpe 's/(\d+)-(\d+)/{$1..$2}/g; $_="echo {$_}"' | bash
It just converts the original string into echo {0,2,{4..6},8} and lets bash brace expansion interpolate it.
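The brace-expansion step can be checked on its own (assuming bash is available):

```shell
# the intermediate command produced by the perl substitution:
bash -c 'echo {0,2,{4..6},8}'
# prints: 0 2 4 5 6 8
```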
eval echo $(cat /sys/devices/system/cpu/online | sed 's/\([[:digit:]]\+\)-\([[:digit:]]\+\)/$(seq \1 \2)/g' | tr , ' ')
Explanation:
cat /sys/devices/system/cpu/online reads the file from sysfs. This can be changed to any other file such as offline.
The output is piped through the substitution s/\([[:digit:]]\+\)-\([[:digit:]]\+\)/$(seq \1 \2)/g. This matches something like 4-6 and replaces it with $(seq 4 6).
tr , ' ' replaces all commas with spaces.
At this point, the input 0,2,4-6,8 is transformed to 0 2 $(seq 4 6) 8. The final step is to eval this sequence to get 0 2 4 5 6 8.
The example echoes the output. Alternatively, it can be written to a variable or used in a for loop.

Compare 2 files using awk - if the 2nd field is the same, sum the 1st field and print it; if not, print the line as is (for non-matching entries in both files)

I have two files -
File 1:
2 923000026531
1 923000031178
2 923000050000
1 923000050278
1 923000051178
1 923000060000
File 2:
2 923000050000
3 923000050278
1 923000051178
1 923000060000
4 923000026531
1 923335980059
I want to achieve the following using awk:
1- If 2nd field is same, sum the 1st field and print it.
2- If 2nd field is not same, print the line as it is. This will have two cases.
2(a) If 2nd field is not same & record belongs to first file
2(b) If 2nd field is not same & record belongs to second file
I have achieved the following using this command:
Command: gawk 'FNR==NR{f1[$2]=$1;next}$2 in f1{print f1[$2]+$1,$2}!($2 in f1){print $0}' f1 f2
Result:
4 923000050000
4 923000050278
2 923000051178
2 923000060000
6 923000026531
1 923335980059
However, this doesn't contain the records from the first file whose second field doesn't appear in the second file, i.e. case 2(a). To be more specific, the following record is missing from the final output:
1 923000031178
I know there are multiple workarounds using extra commands, but I am interested in whether this can be done in a single command.
Give this one-liner a try:
$ awk '{a[$2]+=$1}END{for(x in a)print a[x], x}' f1 f2
2 923000060000
2 923000051178
1 923000031178
6 923000026531
4 923000050278
4 923000050000
1 923335980059

Extract substring from first column

I have a large text file with 2 columns. The first column is large and complicated, but contains a name="..." portion. The second column is just a number.
How can I produce a text file such that the first column contains ONLY the name, but the second column stays the same and shows the number? Basically, I want to extract a substring from the first column only AND have the 2nd column stay unaltered.
Sample data:
application{id="1821", name="app-name_01"} 0
application{id="1822", name="myapp-02", optionalFlag="false"} 1
application{id="1823", optionalFlag="false", name="app_name_public"} 3
...
So the result file would be something like this
app-name_01 0
myapp-02 1
app_name_public 3
...
If your actual Input_file is the same as the shown sample, then the following code may help:
awk '{sub(/.*name=\"/,"");sub(/\".* /," ")} 1' Input_file
Output will be as follows.
app-name_01 0
myapp-02 1
app_name_public 3
Using GNU awk
$ awk 'match($0,/name="([^"]*)"/,a){print a[1],$NF}' infile
app-name_01 0
myapp-02 1
app_name_public 3
Non-Gawk
awk 'match($0,/name="([^"]*)"/){t=substr($0,RSTART,RLENGTH);gsub(/name=|"/,"",t);print t,$NF}' infile
app-name_01 0
myapp-02 1
app_name_public 3
Input:
$ cat infile
application{id="1821", name="app-name_01"} 0
application{id="1822", name="myapp-02", optionalFlag="false"} 1
application{id="1823", optionalFlag="false", name="app_name_public"} 3
...
Here's a sed solution:
sed -r 's/.*name="([^"]+).* ([0-9]+)$/\1 \2/g' Input_file
Explanation:
With the parentheses you capture what's in between into groups.
The first group is everything after name=" up to the next ". [^"] means "not a double quote".
The second group is simply "one or more digits at the end of the line, preceded by a space".

how to print 3rd field in 3rd column itself

In my file I have 3 fields. I want to print only the third field's values in the third column, but the output ends up in the first column. Please check my file and output:
cat filename
1st field 2nd field 3rd field
--------- --------- -----------
a,b,c,d d,e,f,g,h 1,2,3,4,5,5
q,w,e,r t,y,g,t,i 9,8,7,6,5,5
I'm using the following command to print the third field only in the third column
cat filename |awk '{print $3}' |tr ',' '\n'
Output (the 3rd field's values are printed in the 1st field's place; I want them printed only in the 3rd field's area):
first field :-
---------------
1
2
3
4
5
5
expected output
1st field 2nd field 3rd field
--------- --------- -----------
a,b,c,d d,e,f,g,h 1
2
3
4
5
5
q,w,e,r t,y,g,t,i 9
8
7
6
5
5
Input
[akshay#localhost tmp]$ cat file
1st field 2nd field 3rd field
--------- --------- -----------
a,b,c,d d,e,f,g,h 1,2,3,4,5,5
q,w,e,r t,y,g,t,i 9,8,7,6,5,5
Script
[akshay#localhost tmp]$ cat test.awk
NR<3 || !NF{ print; next}
{
split($0,D,/[^[:space:]]*/)
c1=sprintf("%*s",length($1),"")
c2=sprintf("%*s",length($2),"")
split($3,A,/,/)
for(i=1; i in A; i++)
{
if(i==2)
{
$1 = c1
$2 = c2
}
printf("%s%s%s%s%d\n",$1,D[2],$2,D[3],A[i])
}
}
Output
[akshay#localhost tmp]$ awk -f test.awk file
1st field 2nd field 3rd field
--------- --------- -----------
a,b,c,d d,e,f,g,h 1
2
3
4
5
5
q,w,e,r t,y,g,t,i 9
8
7
6
5
5
Explanation
NR<3 || !NF{ print; next}
NR gives you the number of records processed so far; in short, the NR variable holds the current line number.
NF gives you the total number of fields in a record.
The next statement forces awk to immediately stop processing the
current record and go on to the next record.
If the line number is less than 3, or NF is zero (meaning the record has no fields, i.e. a blank line), print the current record and go on to the next one.
split($0,D,/[^[:space:]]*/)
Since we want to preserve the formatting, we save the separators between fields in array D here. If you have GNU awk you can make use of the 4th argument of split(): it lets you split the line into 2 arrays, one of the fields and the other of the separators between the fields, so you can operate on the fields array and print using the separators array between each element to rebuild the original $0.
c1=sprintf("%*s",length($1),"") and c2=sprintf("%*s",length($2),"")
Here sprintf is used to build a string of spaces as wide as the corresponding field ($1 or $2).
split($3,A,/,/)
split(string, array [, fieldsep [, seps ] ])
Divide string into pieces separated by fieldsep and store the pieces
in array and the separator strings in the seps array. The first piece
is stored in array[1], the second piece in array[2], and so forth. The
string value of the third argument, fieldsep, is a regexp describing
where to split string (much as FS can be a regexp describing where to
split input records). If fieldsep is omitted, the value of FS is used.
split() returns the number of elements created.
Loop as long as i in A is true; the i=1 and i++ parts control the order of traversal of the array (thanks to Ed Morton for pointing this out).
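A minimal sketch showing that in-order traversal, using the same comma-split values as the second data line:

```shell
awk 'BEGIN { split("9,8,7,6,5,5", A, ","); for (i = 1; i in A; i++) print i, A[i] }'
# prints the index/value pairs in order: 1 9, 2 8, 3 7, 4 6, 5 5, 6 5
```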
if(i==2)
{
$1 = c1
$2 = c2
}
When i = 1 we print a,b,c,d and d,e,f,g,h; in subsequent iterations we replace $1 and $2 with the c1 and c2 we created above, since you want them shown only once, as requested.
printf("%s%s%s%s%d\n",$1,D[2],$2,D[3],A[i])
Finally print field1 ($1), the separator between field1 and field2 that we saved above (D[2]), field2 ($2), the separator between field2 and field3 (D[3]), and one element of array A per iteration (from split($3,A,/,/)).
$ cat tst.awk
NR<3 || !NF { print; next }
{
front = gensub(/((\S+\s+){2}).*/,"\\1","")
split($3,a,/,/)
for (i=1;i in a;i++) {
print front a[i]
gsub(/\S/," ",front)
}
}
$ awk -f tst.awk file
1st field 2nd field 3rd field
--------- --------- -----------
a,b,c,d d,e,f,g,h 1
2
3
4
5
5
q,w,e,r t,y,g,t,i 9
8
7
6
5
5
The above uses GNU awk for gensub(), with other awks use match()+substr(). It also uses \S and \s shorthand for [^[:space:]] and [[:space:]].
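A portable sketch of that match()+substr() variant (an assumption on my part, not the answerer's code; it avoids gensub() and the \S/\s shorthand):

```shell
awk 'NR<3 || !NF { print; next }
{
  # capture the first two fields plus their trailing spaces as the prefix
  match($0, /[^[:space:]]+[[:space:]]+[^[:space:]]+[[:space:]]+/)
  front = substr($0, RSTART, RLENGTH)
  split($3, a, /,/)
  for (i = 1; i in a; i++) {
    print front a[i]
    gsub(/[^[:space:]]/, " ", front)   # blank out the prefix after the first line
  }
}' file
```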
Considering the columns are tab separated, I would say:
awk 'BEGIN{FS=OFS="\t"}
NR<=2 || !NF {print; next}
NR>2{n=split($3,a,",")
for (i=1;i<=n; i++)
print (i==1?$1 OFS $2:"" OFS ""), a[i]
}' file
This prints the 1st, 2nd and empty lines normally
Then, slices the 3rd field using the comma as separator.
Finally, it loops through the pieces, printing one per line; the first two columns are printed only on the first iteration, then just the value.
Test
$ awk 'BEGIN{FS=OFS="\t"} NR<=2 || !NF {print; next} NR>2{n=split($3,a,","); for (i=1;i<=n; i++) print (i==1?$1 OFS $2:"" OFS ""), a[i]}' a
1st field 2nd field 3rd field
--------- --------- -----------
a,b,c,d d,e,f,g,h 1
2
3
4
5
5
q,w,e,r t,y,g,t,i 9
8
7
6
5
5
Note the alignment is a bit off, since the columns are tab-separated.

write a two column file from two files using awk

I have two files of one column each
1
2
3
and
4
5
6
I want to write a unique file with both elements as
1 4
2 5
3 6
It should be really simple I think with awk.
You could try paste -d ' ' <file1> <file2>. (Without -d ' ' the delimiter would be tab.)
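A minimal run on the two files above:

```shell
printf '1\n2\n3\n' > file1
printf '4\n5\n6\n' > file2
paste -d ' ' file1 file2
# prints:
# 1 4
# 2 5
# 3 6
```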
paste works fine for the example given, but it doesn't handle variable-length lines very well. A nice little-known coreutil, pr, provides a more flexible solution:
$ pr -mtw 4 file1 file2
1 4
2 5
3 6
A variable length example:
$ pr -mtw 22 file1 file2
10 4
200 5
300,000,00 6
And since you asked about awk here is one way:
$ awk '{a[FNR]=a[FNR]$0" "}END{for(i=1;i<=length(a);i++)print a[i]}' file1 file2
1 4
2 5
3 6
Using awk
awk 'NR==FNR { a[FNR]=$0;next } { print a[FNR],$0 }' file{1,2}
Explanation:
NR==FNR ensures our first action block runs for the first file only.
a[FNR]=$0 inserts each line of the first file into array a, indexed by line number.
Once the first file is complete, we move to the second action block.
There we print each line of the first file alongside the corresponding line of the second file.
