I have been spending a lot of time trying to create a quota check script and have not gotten the results I need.
I am using a for loop over each user and an awk command to search for values greater than 3000000.
The base command to output the quotas:
for i in `awk '{print $2}' /etc/userdomains | grep -v "nobody" | sort -u`
do
quota -v -u $i
done
Output per iteration:
Disk quotas for user exampleuser (uid 2599):
Filesystem blocks quota limit grace files quota limit grace
/dev/sda1 8 0 0 10 0 0
/dev/sdb1 0 0 0 0 0 0
/dev/sdc1 57792 0 0 2511 0 0
/dev/sdd1 0 0 0 0 0 0
/dev/sde1 0 0 0 0 0 0
I intend to pipe the output into an awk command that prints field 5 of line 1, and field 2 of every line from line 3 onward where field 2 is greater than 50000.
So the wanted output would be:
exampleuser
57792
OR
exampleuser 57792
So far I have not been able to get these results with the different awk approaches I have tried.
Here are my last two attempts (based on a value greater than 3000000):
for i in `awk '{print $2}' /etc/userdomains | grep -v "nobody" | sort -u`
do
quota -v -u $i | awk '{ if ($2 >= 3000000) print $0 ; else;}'
done
Output:
Disk quotas for user bforrest (uid 2108):
Filesystem blocks quota limit grace files quota limit grace
Disk quotas for user bible (uid 500):
Filesystem blocks quota limit grace files quota limit grace
/dev/sdc1 12230716 0 0 10168 0 0
Disk quotas for user bigbeau (uid 1608):
Filesystem blocks quota limit grace files quota limit grace
Disk quotas for user bilgem (uid 3299):
Filesystem blocks quota limit grace files quota limit grace
Disk quotas for user billbell (uid 2872):
Filesystem blocks quota limit grace files quota limit grace
Disk quotas for user biosalus (uid 3215):
Filesystem blocks quota limit grace files quota limit grace
Disk quotas for user bkeating (uid 1104):
Filesystem blocks quota limit grace files quota limit grace
/dev/sdc1 3106480 0 0 9636 0 0
Disk quotas for user blaaraba (uid 2931):
Filesystem blocks quota limit grace files quota limit grace
Disk quotas for user blackbird (uid 1666):
Filesystem blocks quota limit grace files quota limit grace
Another one:
for i in `awk '{print $2}' /etc/userdomains | grep -v "nobody" | sort -u`
do
quota -v -u $i \
| awk '{ if (NR >= 3 && $2 >= 3000000) print $0 ; else;}' \
| cut -d "*" -f1
done
Output:
/dev/sdc1 55948456 0 0 45806 0 0
/dev/sdd1 91428904 0 0 97739 0 0
/dev/sdd1 512000
/dev/sdc1 60275820 0 0 10594 0 0
/dev/sdb1 512460
/dev/sdb1 93819732 0 0 47951 0 0
/dev/sdd1 527613532 0 0 11935 0 0
/dev/sdd1 56922524 0 0 60761 0 0
/dev/sdc1 307664
/dev/sdb1 65851960 0 0 257999 0 0
Maybe my method is totally off. Any thoughts on this?
UPDATE:
Found a better command (repquota -a) to report quotas. It is much more consistent, since the output doesn't vary depending on where files are located:
for i in `awk '{print $2}' /etc/userdomains | grep -v "nobody" | sort -u`
do
repquota -a | awk '{print $1 " " $3}' | grep -w "$i" \
| awk '{if ($2 >= 5000000) print $0 ; else;}'
done
Output:
a4fundjs 55948456
actifeve 12535196
aepromo 13224160
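A single-pass variant (just a sketch; it assumes the block-usage count is in repquota's column 3, as used above, and writes the user list to a hypothetical temp file) avoids rerunning repquota -a once per user:
# Build the user list once, then scan the repquota output a single time.
awk '{print $2}' /etc/userdomains | grep -v "nobody" | sort -u > /tmp/quota_users
repquota -a | awk '$3 >= 5000000 {print $1, $3}' | grep -w -f /tmp/quota_users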
For your original input,
awk 'NR==1{print $5} NR>2 && $2>50000 {print $2}'
will print
exampleuser
57792
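For example, dropped into the loop from the question (a sketch using the same field positions as the quota output shown above):
for i in `awk '{print $2}' /etc/userdomains | grep -v "nobody" | sort -u`
do
    # username from line 1 field 5, then any block count over 50000 from lines 3+
    quota -v -u $i | awk 'NR==1{print $5} NR>2 && $2>50000 {print $2}'
done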
I would like to add the date for each line of the df output.
I tried:
df -m | awk '{print `date +%Y-%m`";"$1";"$2";"$3 }'
... but it doesn't work.
How can I add the date?
Here is an alternative:
df -m | awk '{print strftime("%Y-%m"), $0}'
And here is the output from the command above:
$ df -m | awk '{print strftime("%Y-%m"), $0}'
2019-10 Filesystem 1M-blocks Used Available Use% Mounted on
2019-10 devtmpfs 9852 0 9852 0% /dev
2019-10 tmpfs 9871 132 9740 2% /dev/shm
2019-10 tmpfs 9871 2 9869 1% /run
2019-10 /dev/mapper/fedora_canvas-root 50141 14731 32834 31% /
2019-10 tmpfs 9871 1 9871 1% /tmp
2019-10 /dev/sda5 976 243 667 27% /boot
2019-10 /dev/mapper/fedora_canvas-home 1277155 217435 994777 18% /home
2019-10 tmpfs 1975 63 1912 4% /run/user/1000
$
And here is an alternative version, printing just the 3 columns you wanted in the OP:
df -m | awk '{print strftime("%Y-%m"), $1, $2, $3}' | column -t
And the corresponding output:
$ df -m | awk '{print strftime("%Y-%m"), $1, $2, $3}' | column -t
2019-10 Filesystem 1M-blocks Used
2019-10 devtmpfs 9852 0
2019-10 tmpfs 9871 132
2019-10 tmpfs 9871 2
2019-10 /dev/mapper/fedora_canvas-root 50141 14731
2019-10 tmpfs 9871 1
2019-10 /dev/sda5 976 243
2019-10 /dev/mapper/fedora_canvas-home 1277155 217435
2019-10 tmpfs 1975 63
$
You may use this approach:
df -m | awk -v dt=$(date "+%Y-%m") '{print dt "::", $0}'
We use -v dt=$(date "+%Y-%m") to execute the date command in the shell and pass the result to awk in the variable dt.
If you want only the first 3 columns from the df command output, then use:
df -m | awk -v dt=$(date "+%Y-%m") '{print dt, $1, $2, $3}'
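If the date format ever contains spaces, quoting the command substitution keeps it as a single variable assignment for awk:
df -m | awk -v dt="$(date '+%Y-%m-%d %H:%M')" '{print dt, $1, $2, $3}'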
A Perl solution.
df -m | perl -MPOSIX=strftime -alpe '$_ = strftime("%Y-%m ", localtime) . "@F[0..2]"'
Command line options:
-M : Load the strftime() function from the POSIX module
-a : Autosplit the input records on whitespace into @F
-l : Remove newlines from input and add them to output
-p : Put each input record into $_, execute code and then print $_
-e : Run this code for each input record
The code updates $_ by concatenating the date (strftime("%Y-%m ", localtime)) with the first three columns (@F[0..2]) of the input record.
I want to find out the overall CPU usage and RAM usage as percentages, but I have not had any success.
$ command for cpu usage
4.85%
$ command for memory usage
15.15%
OR
$ command for cpu and memory usage
cpu: 4.85%
mem: 15.15%
How can I achieve this?
You can use top and/or vmstat from the procps package.
Use vmstat -s to get the amount of RAM on your machine (optional), and
then use the output of top to calculate the memory usage percentages.
%Cpu(s): 3.8 us, 2.8 sy, 0.4 ni, 92.0 id, 1.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 24679620 total, 1705524 free, 7735748 used, 15238348 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 16161296 avail Mem
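A minimal awk sketch of that calculation, assuming the field layout shown above (idle CPU is field 8 of the %Cpu(s) line; total and used memory are fields 4 and 8 of the KiB Mem line, which newer top versions print as MiB Mem):
top -bn1 | awk '/%Cpu/    {printf "cpu: %.2f%%\n", 100 - $8}
                /KiB Mem/ {printf "mem: %.2f%%\n", 100 * $8 / $4}'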
You can also do this for relatively short output:
watch '/usr/bin/top -b | head -4 | tail -2'
A shell pipe that calculates the current RAM usage periodically is
watch -n 5 "/usr/bin/top -b | head -4 | tail -2 | perl -anlE 'say sprintf(\"used: %s total: %s => RAM Usage: %.1f%%\", \$F[7], \$F[3], 100*\$F[7]/\$F[3]) if /KiB Mem/'"
(CPU + Swap usages were filtered out here.)
This command prints every 5 seconds:
Every 5.0s: /usr/bin/top -b | head -4 | tail -2 | perl -anlE 'say sprintf("u... wb3: Wed Nov 21 13:51:49 2018
used: 8349560 total: 24667856 => RAM Usage: 33.8%
Please use one of the following:
$ free -t | awk 'NR == 2 {print "Current Memory Utilization is : " $3/$2*100}'
Current Memory Utilization is : 14.6715
OR
$ free -t | awk 'FNR == 2 {print "Current Memory Utilization is : " $3/$2*100}'
Current Memory Utilization is : 14.6703
CPU usage => top -bn2 | grep '%Cpu' | tail -1 | grep -P '(....|...) id,' | awk '{print 100-$8 "%"}'
Memory usage => free -m | grep 'Mem:' | awk '{ print $3/$2*100 "%"}'
For the CPU usage percentage you can use:
top -b -n 1| grep Cpu | awk -F "," '{print $4}' | awk -F "id" '{print $1}' | awk -F "%" '{print $1}'
A one-liner solution to get the RAM % in use:
free -t | awk 'FNR == 2 {printf("%.0f%"), $3/$2*100}'
Example output: 24%
For more precision, you can change the integer N inside printf("%.Nf%%") in the previous command. For example, to get 2 decimal places of precision you could do:
free -t | awk 'FNR == 2 {printf("%.2f%"), $3/$2*100}'
Example output: 24.57%
I need to get stats from my CentOS 6.7 server with cPanel and send them to my external monitoring server. What I would like to get is the average CPU load per user or per process name over the last 3 minutes. After a lot of research and testing I have not found any practicable solution, apart from running top in bash with
top -d 180 -b -n 2 > /top.log
The second iteration looks like...
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
38017 mysql 20 0 760m 265m 6324 S 1.4 14.2 244:27.08 mysqld
39501 nobody 20 0 1047m 93m 7068 S 0.1 5.0 0:06.80 httpd
54877 johnd 20 0 32728 3612 2364 S 0.0 0.2 0:00.09 imap
51530 johnd 20 0 353m 5372 1928 S 0.0 0.3 0:04.17 php-fpm
39500 nobody 20 0 1046m 79m 3656 S 0.0 4.3 0:02.57 httpd
7 root 20 0 0 0 0 S 0.0 0.0 27:47.61 events/0
39497 nobody 20 0 1046m 84m 7784 S 0.0 4.5 0:02.77 httpd
etc...
then grep (only on the second iteration's output) for a COMMAND or USER, sum the %CPU column, and divide by 100 to get a load-like value:
echo "$PRTGTOP" | grep johnd | awk '{ sum += $9; } END { print sum/100; }'
Should I also try to account for the process times, etc.? Maybe there is a simpler way to achieve the same result, perhaps with third-party software to generate the stats?
Thanks.
top gets its info from /proc/*/stat. Each numerical directory under /proc is a process number for a currently running process.
It may be easier for you to collect data directly from those directories. The data format is well defined and can be found in man proc under the subsection called "/proc/[pid]/stat".
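For example, a rough sketch that pulls the accumulated CPU time of one process (assuming a PID in $pid; per man proc, fields 14 and 15 of /proc/[pid]/stat are utime and stime in clock ticks, although a command name containing spaces would shift those field numbers):
pid=1   # any PID you want to inspect
# utime ($14) + stime ($15) are in clock ticks; getconf CLK_TCK gives ticks per second
awk -v tck="$(getconf CLK_TCK)" '{printf "pid %s: %.2f CPU-seconds\n", $1, ($14 + $15) / tck}' /proc/$pid/stat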
You can try the pidstat tool (part of the sysstat package):
pidstat -C httpd -U johnd -h -u 180 1 | awk '{ sum += $7; } END { print sum/100;}'
This will return the percentage CPU usage of all processes matching the httpd command string and the johnd user over a 180-second interval.
OK, pidstat is better, thanks! But if a user's process runs for only a few seconds, no CPU use is reported. I found the best results with:
# run pidstat with a 10-second interval, 18 times (3 minutes total)
pidstat -U -u 10 18 > /pidstat.log
then
# sum the %CPU values, divide by 100 and average over the 18 samples
cat /pidstat.log | grep -v Average | grep johnd | awk '{ sum += $8; } END { print sum/100/18;}' OFMT="%3.3f"
cat /pidstat.log | grep -v Average | grep httpd | awk '{ sum += $8; } END { print sum/100/18;}' OFMT="%3.3f"
With this I get the best per-user CPU usage stats, even when a process runs for only a few seconds but with high CPU usage.
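A single awk pass can also aggregate every user at once instead of one grep per user (a sketch; it keeps %CPU in $8 as above and assumes the user name lands in $3 of your pidstat output, so adjust the column numbers to match):
grep -v Average /pidstat.log \
    | awk '$8 ~ /^[0-9.]+$/ {cpu[$3] += $8}
           END {for (u in cpu) printf "%s %.3f\n", u, cpu[u] / 100 / 18}'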
Problem: the output file "single_hits.txt" is blank:
cut -f10 genome_v_trans.pslx | sort | uniq -c | grep ' 1 ' | sed -e 's/ 1 /\\\</' -e 's/$/\\\>/' > single_hits.txt
I have downloaded the script, which was written for Linux, to use on Mac OS X 10.7.5. Some changes need to be made, as it is not working. I have nine "contigs" of DNA data that need to be filtered to remove all but the unique contigs. blat is used to compare two datasets and output a .pslx file with these contigs, which worked:
964 0 0 0 0 0 3 292 + m.1 1461 0 964 3592203 ...
501 0 0 0 0 0 3 468 - m.1 1461 960 1461 5269699 ...
1168 0 0 0 1 2 7 1232 - m.7292 1170 0 1170 5233270 ...
Then this script is supposed to remove identical contigs, such as the top two (m.1).
This seems to work on the limited data you gave:
grep -v `awk '{print $10}' genome_v_trans.pslx | uniq -d` genome_v_trans.pslx
Unless you want it to have <> in place of the duplicates; in that case you can use sed to substitute the duplicate entries, doing something like:
IFS=$(echo -en "\n\b") && for a in $(awk '{print $10}' genome_v_trans.pslx | uniq -d); do sed -i "s/$a/<>/g" genome_v_trans.pslx; done && unset IFS
results in:
964 0 0 0 0 0 3 292 + <> 1461 0 964 3592203 ...
501 0 0 0 0 0 3 468 - <> 1461 960 1461 5269699 ...
1168 0 0 0 1 2 7 1232 - m.7292 1170 0 1170 5233270 ...
or if you wanted that in the singlehits file:
IFS=$(echo -en "\n\b") && for a in $(awk '{print $10}' dna.txt | uniq -d); do sed "s/$a/<>/g" dna.txt >> singlehits.txt; done && unset IFS
SINGLE_TMP=/tmp/_single_tmp_$$ && awk '{if ($10 == "<>") print}' singlehits.txt > "$SINGLE_TMP" && mv "$SINGLE_TMP" singlehits.txt && unset SINGLE_TMP
Or, more elegantly: sed -ni '/<>/p' singlehits.txt
singlehits.txt:
964 0 0 0 0 0 3 292 + <> 1461 0 964 3592203 ...
501 0 0 0 0 0 3 468 - <> 1461 960 1461 5269699 ...
Input: df -k
Output:
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 10332220 443748 9785150 5% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 45475864 1688 45474176 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
/dev/dsk/c0t0d0s3 10332220 3513927 6714971 35% /usr
I want to omit the first line (Filesystem kbytes used avail capacity Mounted on) from the output.
I used df -k | tail -n+2 on Linux to get exactly what I wanted, but on SunOS I get:
zenvo% df -k | tail -n+2
usage: tail [+/-[n][lbc][f]] [file]
tail [+/-[n][l][r|f]] [file]
How can I achieve the required output:
/dev/dsk/c0t0d0s0 10332220 443748 9785150 5% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 45475864 1688 45474176 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
/dev/dsk/c0t0d0s3 10332220 3513927 6714971 35% /usr
Note: the number of rows might change.
I know it's an old thread, but the shortest and the clearest of all:
df -k | sed 1d
I haven't used SunOS, but using sed you should be able to delete the first line like this:
df -k | sed -e /Filesystem/d
edit: But you would have to be careful that the word Filesystem doesn't show up elsewhere in the output. A better solution would be:
df -k | sed -e /^Filesystem/d
If you want to omit the first line of any result, you can use tail:
<command> | tail -n +2
So in your case:
df -k | tail -n +2
https://man7.org/linux/man-pages/man1/tail.1.html
What about:
df -k | tail -$((`df -k | wc -l`-1))