Here is the output of my netstat command. I want to sum the numbers in the first field, like 7+8+1+1+1+1+3+1+2 and so on. How do I use bc, or any other command, to total them?
[root@example httpd]# netstat -natp | grep 7143 | grep EST | awk -F' ' '{print $5}' | awk -F: '{print $1}' | sort -nr | uniq -c
7 209.139.35.xxx
8 209.139.35.xxx
1 209.139.35.xxx
1 209.139.35.xxx
1 208.46.149.xxx
3 96.17.177.xxx
1 96.17.177.xxx
2 96.17.177.xxx
You need to get the first column with awk (You don't actually need this, but I'm leaving it as a monument to my eternal shame)
awk '{print $1}'
and then use awk again to sum the column of numbers and print the result
awk '{ sum+=$1} END {print sum}'
All together:
netstat -natp | grep 7143 | grep EST | awk -F' ' '{print $5}' | awk -F: '{print $1}' | sort -nr | uniq -c | awk '{print $1}' | awk '{ sum+=$1} END {print sum}'
I know this doesn't use bc, but it gets the job done, so hopefully that's enough.
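If you do want bc specifically, one option (a sketch, assuming GNU paste is available) is to join the counts with + and hand the resulting expression to bc:
netstat -natp | grep 7143 | grep EST | awk -F' ' '{print $5}' | awk -F: '{print $1}' | sort -nr | uniq -c | awk '{print $1}' | paste -sd+ - | bc
Here paste -sd+ turns the column of counts into 7+8+1+1+... and bc evaluates it.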
I am trying to print the output of an awk command delimited with ",".
I'm also trying to get the same output using cut.
cat File1
dot|is-big|a
dot|is-round|a
dot|is-gray|b
cat|is-big|a
hot|in-summer|a
dot|is-big|a
dot|is-round|b
dot|is-gray|a
cat|is-big|a
hot|in-summer|a
Command tried:
$ awk 'BEGIN{FS="|"; OFS=","} {print $1,$3}' file1.csv | sort | uniq -c
Output Got:
2 cat,a
4 dot,a
2 dot,b
2 hot,a
Desired Output:
2,cat,a
4,dot,a
2,dot,b
2,hot,a
A couple of other commands tried:
$ cat file1.csv | cut --output-delimiter="|" -d'|' -f1,3 | sort | uniq -c
You need to change the delimiter to , after running uniq -c, since uniq -c prepends the count as a new first column:
awk -F'|' '{print $1, $3}' file1.csv | sort | uniq -c | awk 'BEGIN{OFS=","} {$1=$1;print}'
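Running that against the sample file1.csv above should produce the desired output:
2,cat,a
4,dot,a
2,dot,b
2,hot,a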
But you don't need to use sort | uniq -c if you're using awk, it can do the counting itself.
awk 'BEGIN{FS="|";OFS=","} {a[$1 OFS $3]++} END{for(k in a) print a[k], k}' file1.csv
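One caveat with the awk-only version: for (k in a) iterates in an unspecified order, so if you want the rows ordered like the uniq -c output, pipe the result through sort (a minimal sketch):
awk 'BEGIN{FS="|";OFS=","} {a[$1 OFS $3]++} END{for(k in a) print a[k], k}' file1.csv | sort -t, -k2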
I'm after some assistance in getting some stats from an nginx log file. Something is hammering our site, and I can see the top IP with this awk command:
sudo awk '{ print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | head -n 50
I need to get a list of the URLs requested by this top IP. Can anyone help with the best way to achieve this?
I've got the awk command to list the top URLs here, but I need to put the two together:
sudo awk '{ print $7}' /var/log/nginx/access.log| sort | uniq -c | sort -nr | head -n 20
Thanks
John
You can use this:
logfile="/var/log/nginx/access.log"
grep "^$(cat "${logfile}" | cut -d' ' -f1 | sort | uniq -c | sort -nr | head -n 1 | awk -F' ' '{print $2}') " "${logfile}" | cut -d' ' -f7 | sort | uniq -c | sort -nr | head -n 50
sudo awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr
I am trying to get the sum of the 5th column of a .csv file using bash, but the command I am using keeps giving me zero. I am piping the file through grep to remove the column header row:
grep -v Header results.csv | awk '{sum += $5} END {print sum}'
here's how I would do it:
tail -n+2 results.csv | cut -d, -f5 | awk '{sum+=$1} END {print sum}'
or:
tail -n+2 results.csv | awk -F, '{sum+=$5} END {print sum}'
(depending on what turns out to be faster.)
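For what it's worth, the original command probably prints zero because awk's default field separator is whitespace, so $5 is empty on a comma-separated line; adding -F, should make it behave as intended:
grep -v Header results.csv | awk -F, '{sum += $5} END {print sum}'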
md5sum file.png | awk '{print $1}' | wc -m
I get: 33
I expected it to return 32, the length of an md5 hash. After reading the man page and googling, I still couldn't figure out why.
TL;DR
Use awk's length() function:
md5sum file.png | awk '{print length($1)}'
32
It's because awk adds a line feed character to the output. You can check:
md5sum file.png | awk '{print $1}' | xxd
You can tell awk not to do that by using the ORS (output record separator) variable:
md5sum file.png | awk '{print $1}' ORS='' | wc -m
32
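Another option along the same lines is awk's printf, which doesn't append ORS at all:
md5sum file.png | awk '{printf "%s", $1}' | wc -m
32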
I would like to get the name of the person who has the maximum age in a Unix data file. How can I do this?
Rob,20
Tom,30
I tried the command below, but it gives me only the max age.
awk -F"," '{print $2}' age.txt | sort -r | head -1
$ cat file | awk -F, '{print $2,$1;}' | sort -n | tail -n1
30 Tom
$ cat file | awk -F, '{print $2,$1;}' | sort -n | tail -n1 | awk '{print $2;}'
Tom
Perhaps try
awk -F, '{if (maxage<$2) { maxage= $2; name=$1; };} END{print name}' \
age.txt
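With the sample data above, that should print the name with the highest age:
$ awk -F, '{if (maxage<$2) { maxage= $2; name=$1; };} END{print name}' age.txt
Tom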
traditional:
sort -t, -nr +1 age.txt | head -1 | cut -d, -f1
POSIXy:
sort -t, -k2,2nr age.txt | head -n 1 | cut -d, -f1
I think you can easily do this using the command below:
echo -e "Rob,20\nTom,30\nMin,10\nMax,50" | sort -t ',' -rk 2 | head -n 1
Please comment in case of any issues.
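One caveat: without -n that sort compares the ages as strings, which happens to work for these two-digit values but would misorder, say, 9 against 100; adding -n forces a numeric comparison:
echo -e "Rob,20\nTom,30\nMin,10\nMax,50" | sort -t ',' -nrk 2 | head -n 1
Max,50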