How would I get the CPU utilization number from running the top command for a specific user?
Pipe top to awk and add up the CPU utilization column:
top -b -n 1 -u username | awk 'NR>7 { sum += $9; } END { print sum; }'
That awk command says: "For rows (from top) after row 7, add the number in field 9 ($9) to the variable sum. Once you have gone through all of the rows piped in from top, print the value of sum."
If the -u flag isn't working on your system, you can combine the username check with the NR>7 condition:
top -b -n 1 | awk 'NR>7 && $2=="root" { sum += $9 } END { print sum }'
If you wanted to print the percentage used for each user listed in top, you could drop that second condition and make sum an array:
top -b -n 1 | awk 'NR>7 {users[$2]+=($9)} END {for(user in users){print user,users[user]}}'
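That prints one "user total" pair per line, something like this (values are illustrative, not from a real run):

root 5.3
www-data 0.7
alice 12.1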
One last thing: the %CPU column is measured per core, so on a multi-core machine the total can exceed 100. If you want overall utilization, divide by the number of cores.
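For example, a minimal sketch that normalizes the total by core count using nproc (assuming GNU coreutils provides nproc on your system):

top -b -n 1 -u username | awk -v cores="$(nproc)" 'NR>7 { sum += $9 } END { print sum / cores }'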
I have written a shell script to find the average, min and max values from a sar report, using the command sar -r -f as below:
MIN1=`sar -r -f |awk '{print $5}'|grep -v '%memused'|awk 'min=="" || $0 < min {min=$0} END{ print min}'`
MAX1=`sar -r -f |awk '{print $5}'|grep -v '%memused'|grep -v '_x86_64_'|grep -v '86327'|awk 'min=="" || $0 > min {min=$0} END{ print min}'`
AVG1=`sar -r -f |awk '{print $5}'|grep -v '%memused'|grep -v '_x86_64_'|grep -v '86327'|awk '{total+=$0} END {print total/NR}'`
echo Minimum value is $MIN1 : average value is $AVG1 : maximum value is $MAX1
sar -r -f command output:
Linux 2.6.32-431.el6.x86_64 (servername) 11/03/2017 _x86_64_ (4 CPU)
12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
Average: 315191 32697953 99.05 86327 16937751 25889218 53.25
script output:
Minimum value is 95.50 : average value is 868.053 : maximum value is 86327
Expected output:
Minimum value is 95 : average value is 96 : maximum value is 97
I have skipped the 86327 value appearing in the last column using grep -v 86327, but since it varies every time, my output will vary.
The following single awk may help you here. Note that on the Average: line the timestamp collapses into a single "Average:" field, so its 5th field is the kbbuffers column (the 86327 in your output); that is why your max came out as 86327, and why that line should be skipped by name rather than grep -v'd by a value that changes:
sar -r -f | awk '
FNR>1 && !/memused/ && !/^Average/ && NF{
max=max>$5?max:$5;
min=min>$5?$5:(min?min:$5);
sum+=$5;
count++}
END{
print "Minimum:",min,"Average:",(sum/count),"Maximum:",max
}'
EDIT: Adding an explanation of the code as well.
sar -r -f | awk ' ## Pipe the standard output of sar into awk.
FNR>1 && !/memused/ && !/^Average/ && NF{ ## Process a line only if it is past line 1, does not contain the string memused (the repeated headers), does not start with Average: (whose 5th field is kbbuffers, not %memused), and is not empty.
max=max>$5?max:$5; ## Keep max if it is already greater than the 5th field, otherwise set it to the 5th field, tracking the maximum value.
min=min>$5?$5:(min?min:$5); ## If min is greater than the 5th field, set min to $5; otherwise keep min if it already has a value, or initialize it to $5 if it is still empty.
sum+=$5; ## Add the 5th field's value to the running total sum.
count++} ## Count only the lines actually summed, so the average divides by the right number (FNR would also count the skipped lines).
END{
print "Minimum:",min,"Average:",(sum/count),"Maximum:",max ## Print the minimum, average and maximum values.
}'
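A quick way to sanity-check the logic without a real sar file; the data lines below are made up but mirror the format in your output, and only the 5th field matters:

printf '%s\n' \
  'Linux 2.6.32-431.el6.x86_64 (servername) 11/03/2017 _x86_64_ (4 CPU)' \
  '12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit' \
  '12:10:01 AM 400000 30000000 95.50 86327 16937751 25889218 53.25' \
  '12:20:01 AM 350000 30500000 97.00 86327 16937751 25889218 53.25' \
  'Average: 375000 30250000 96.25 86327 16937751 25889218 53.25' |
awk 'FNR>1 && !/memused/ && !/^Average/ && NF{
max=max>$5?max:$5; min=min>$5?$5:(min?min:$5); sum+=$5; count++}
END{print "Minimum:",min,"Average:",(sum/count),"Maximum:",max}'

This prints Minimum: 95.50 Average: 96.25 Maximum: 97.00, and would report 86327 as the maximum if the Average: line were not skipped.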
My simple gawk filter used in my program is not filtering out a value that is a digit longer than the rest.
Here is my text file:
172 East Fourth Street Toronto 4 1890 1500000 6
2213 Mt. Vernon Avenue Vaughn 2 890 500000 4
One Lincoln Plaza Toronto 2 980 900000 1
The columns are separated by tabs.
My gawk script:
echo "Enter max price"
read price
gawk -F "\t+" '$5 <= "'$price'"' file
The 1500000 value appears if I enter a value of 150001 or greater. I think it has to do with gawk not reading the last digit correctly. I'm not permitted to change the original text file, and I need to use the gawk command. Any help is appreciated!
Your awk command performs lexical comparison rather than numerical comparison, because the RHS - the price value - is enclosed in double-quotes.
Removing the double-quotes would help, but it's advisable to reformulate the command as follows:
gawk -F '\t+' -v price="$price" '$5 <= price' file
The shell variable $price is now passed to Awk using -v, as Awk variable price, which is the safe way to pass values to awk - you can then use a single-quoted awk script without having to splice in shell variables or having to worry about which parts may be expanded by the shell up front.
Afterthought: As Ed Morton points out in a comment, to ensure that a field or variable is treated as a number, append +0 to it; e.g., $5 <= price+0 (conversely, append "" to force treatment as a string).
By default, Awk infers from the values involved and the context whether to interpret a given value as a string or a number - which may not always give the desired result.
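To see the difference directly, compare both modes on the values from the question; string comparison goes character by character, so "1500000" sorts before "150001":

awk 'BEGIN { print ("1500000" <= "150001") }'   # string comparison: prints 1
awk 'BEGIN { print ( 1500000 <=  150001  ) }'   # numeric comparison: prints 0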
You're really calling a separate gawk for each column? One will do:
gawk -F "\t+" -v OFS="\t" \
-v city="$city" \
-v bedrooms="$bedrooms" \
-v space="$space" \
-v price="$price" \
-v weeks="$weeks" '
$2 == city && $3 >= bedrooms && $4 >= space && $5 <= price && $6 <= weeks {
$1 = $1; print
}
' listing |
sort -t $'\t' $sortby $ordering |
column -s $'\t' -t
(This is not an answer, just a comment that needs formatting)
The $1=$1 bit is an awk trick to make it rewrite the current record using the Output Field Separator (OFS), a single tab. It saves you a call to tr.
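For example, this collapses runs of tabs down to single tabs on output (the input line here is made up to match the listing format):

printf 'One Lincoln Plaza\t\tToronto\t2\n' |
  gawk -F '\t+' -v OFS='\t' '{ $1 = $1; print }'

Because -F '\t+' treats each run of tabs as one separator, reassigning $1 forces gawk to rebuild $0 with a single tab between the fields.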
I have a file:
Product Cost
a 10
b 20
c 30
I want to count the number of products whose cost is greater than 10.
I have used:
awk '{$2>10}' myfile | wc -l
but it's not working: I get the result 0 instead of the 2 I am expecting. What's wrong?
You have a condition wrapped in braces; you don't want that. You also need to skip the header line, since in awk's string comparison the string "Cost" compares greater than 10:
awk 'NR > 1 && $2 > 10' myfile | wc -l
As written, the expression inside the braces was evaluated and its 0-or-1 result discarded; there was no print, so awk produced no output and wc -l counted 0.
Also, as Barmar points out, you can have awk do the counting without using wc.
You don't need to use wc; you can do the counting in awk. But the main issue is that the condition test needs to be outside the braces (or inside the braces as part of an if statement). Skipping the header line with NR > 1 keeps "Cost" from being counted too:
awk 'NR > 1 && $2 > 10 { count++ } END { print count+0 }' myfile
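With the sample file above, this prints 2, counting the rows for b and c:

$ awk 'NR > 1 && $2 > 10 { count++ } END { print count+0 }' myfile
2

The count+0 makes awk print 0 rather than an empty line when no row matches.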
awk 'NR > 1 { if ($2 > 10) print }' myfile | wc -l
I think this might be the solution (note that it is wc -l with a lowercase L, not wc -1); give it a try and let me know.
I have a list that has an ID, population, area and province, that looks like this:
1:517000:405212:Newfoundland and Labrador
2:137900:5660:Prince Edward Island
3:751400:72908:New Brunswick
4:938134:55284:Nova Scotia
5:7560592:1542056:Quebec
6:12439755:1076359:Ontario
7:1170300:647797:Manitoba
8:996194:651036:Saskatchewan
9:3183312:661848:Alberta
10:4168123:944735:British Columbia
11:42800:1346106:Northwest Territories
12:31200:482443:Yukon Territories
13:29300:2093190:Nunavut
I need to display the names of the provinces with the lowest and highest population density (population/area). How can I divide column 2 by column 3 (to 2 decimal places) but keep the file information intact on either side (e.g. 1:1.28:Newfoundland and Labrador)? After that I figure I can just pump it into sort -t: -nk2 | head -n 1 and sort -t: -nrk2 | head -n 1 to pull them.
The recommended command given was grep.
Since you seem to have the sorting and extraction under control, here's an example awk script that should work for you:
#!/usr/bin/awk -f
BEGIN {
FS=":"
OFS=":"
OFMT="%.2f"
}
{
print $1,$2/$3,$4
}
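Assuming the script is saved as density.awk and the data as provinces.txt (both names are just placeholders), you can run it and feed the result straight into your sort commands:

awk -f density.awk provinces.txt | sort -t: -nk2 | head -n 1    # lowest density
awk -f density.awk provinces.txt | sort -t: -nrk2 | head -n 1   # highest density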
I have this array:
array=(1 2 3 4 4 3 4 3)
I can get the largest number with:
echo "num: $(printf "%d\n" ${array[#]} | sort -nr | head -n 1)"
#outputs 4
But I want to take all the 4's and sum them up, meaning I want it to output 12 (there are 3 occurrences of 4) instead. Any ideas?
dc <<<"$(printf '%d\n' "${array[@]}" | sort -n | uniq -c | tail -n 1) * p"
sort to get max value at end
uniq -c to get only unique values, with a count of how many times they appear
tail to get only the last line (with the max value and its count)
dc to multiply the value by the count
I picked dc for the multiplication step because it's RPN, so you don't have to split up the uniq -c output and insert anything in the middle of it - just add stuff to the end.
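You can see exactly what dc receives by running the pipeline up to tail (with the array above):

$ printf '%d\n' "${array[@]}" | sort -n | uniq -c | tail -n 1
      3 4

Appending " * p" turns that into the dc program 3 4 * p, which multiplies the count by the value and prints 12.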
Using awk:
$ printf "%d\n" "${array[@]}" | sort -nr | awk 'NR>1 && p!=$0{exit} {x+=$0;p=$0} END{print x}'
12
Using sort, the numbers are sorted (-n) in reverse (-r) order; awk keeps summing the numbers until it finds one that differs from the previous value, then stops, and the END block prints the sum (this also handles the case where every element is the same).
You can do this with awk:
awk -v RS=" " '{sum[$0]+=$0; if($0>max) max=$0} END{print sum[max]}' <<<"${array[@]}"
Setting RS (record separator) to space allows you to read your array entries as separate records.
sum[$0]+=$0 means sum is a map of cumulative sums for each distinct input value; if($0>max) max=$0 tracks the largest number seen so far; END{print sum[max]} prints the cumulative sum for that largest number at the end.
<<<"${array[#]}" is a here-document that allows you to feed a string (in this case all elements of the array) as stdin into awk.
This way there is no piping or looping involved - a single command does all the work.
Using only bash:
joined="${array[*]}"        # join the array elements with single spaces
echo $(( ${joined// /+} ))  # replace the spaces with plus signs and evaluate
Join the array's elements into one string, replace the spaces with plus signs, and evaluate the result with arithmetic expansion. Note that this sums every element of the array (24 for the array above), not just the occurrences of the maximum.